Lydmaskinen Server Migration
Lydmaskinen maintenance, Saturday at 13:00
Well - here we go with the maintenance! We'll be back later today
EDIT: The maintenance is complete
Rune Borup :: Producer / Songwriter / Synth Whisperer @ FishCorp
Well .. we also get a kick out of tinkering with it! For instance, I've learned a lot about ZFS today (and how annoying it is!). The result of today's effort is that we can now move the server (read: the virtual-machine server) between the two physical servers with a single click, so if one of them needs a reboot, we can just shift everything over to the other, and so on - with no downtime at all!
All in all, this should minimize downtime going forward - but today's operation was one that could easily go wrong (and it did act up quite a bit along the way), which is why we closed up shop for a short while!
Rune Borup :: Producer / Songwriter / Synth Whisperer @ FishCorp
Nicely done. Thanks for taking the time.
Never has so great an idiot understood so little about so many plug-ins
https://soundcloud.com/user-469072941
- Mr. Soundman
- Forum Donator
- Posts: 2968
- Location: Fyn
- Søren Steinmetz
- Member
- Posts: 268
- Location: Gislinge area
Thanks for your work, and yes, ZFS can be quite finicky.
The next project, then, is to set up automatic failover between the two servers
Technically, you can do failover without it being HA - then it's just continuous operation of a different kind. Can't that be done with 2 servers?
Not as far as I know - and with my (admittedly limited) knowledge, I can actually see why. A quorum* must be maintained at all times. I believe the real requirement isn't three machines as such, but an odd number of servers - so that a majority decision can always be reached about what should happen. If a server drops offline, say, and comes back online a moment later, but the "master" has switched in the meantime .. then the two machines can in principle end up fighting over which one is the "master" with no way to settle it definitively - and then the two machines fall out of sync! So, in short: it must always be possible to have 2 against 1!
Does that make sense?
*from Wikipedia: "requirement for a quorum is protection against totally unrepresentative action in the name of the body by an unduly small number of persons."
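The odd-node-count argument above can be illustrated with a minimal sketch (a hypothetical helper, not taken from any real cluster software): a partition of the cluster may only keep acting as master if it can see a strict majority of all configured nodes.

```python
def has_quorum(visible_nodes: int, cluster_size: int) -> bool:
    """A partition may act only if it sees a strict majority of all nodes."""
    return visible_nodes > cluster_size // 2

# 3-node cluster: the 2-node side wins, the isolated node stands down.
print(has_quorum(2, 3))  # True
print(has_quorum(1, 3))  # False

# 2-node cluster: a 1-1 split leaves NEITHER side with a majority,
# so neither can safely claim to be master - "2 against 1" is impossible.
print(has_quorum(1, 2))  # False
```

With two nodes, the only way to break such a tie is an outside vote - which is exactly the witness idea discussed below in the thread.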
Rune Borup :: Producer / Sangskriver / Synth-hvisker @ FishCorp
Yes, I understand.
I read this, which says the same thing:
Failover Negotiation and Split-Brain
In a two-node cluster, it is more difficult for the cluster logic to determine what to do if there are communication (network) issues rather than a node failure. If the cluster nodes lose communication with each other, how does a node know whether or not to failover the workloads from the node it cannot communicate with? Typically, this is handled through a cluster witness of some kind. A cluster witness (or multiple witnesses in some cases) is a third point of contact that, in theory, is still contactable by one or both of the nodes and it can arbitrate the cluster status. The witness must live outside the cluster so it becomes one more object to manage in your network in addition to the cluster.
As said, this works “in theory” but in reality, it is more complicated. Unlike a true third node, the cluster witness is not really a fully active member of the cluster and its assessment of the state of the cluster can also be hampered by communication issues. A bad witness implementation could potentially put the cluster into a dreaded split-brain scenario where both nodes begin running all workloads and once this happens, it is a nightmare to recover from. Correctly implementing a good witness/arbitration system for a two-node cluster is complex and this article by Andrew Beekhof on clusterlabs.org explains these complexities in more detail if you are interested in diving deeper.
Having a minimum of three nodes can ensure that a cluster always has a quorum of nodes to maintain a healthy active cluster. With two nodes, a quorum doesn’t exist. Without it, it is impossible to reliably determine a course of action that both maximizes availability and prevents data corruption. Nothing is infallible, of course, and even a three-node cluster can be taken offline by network issues and loss of quorum. If that were to happen, however, there are likely problems occurring that are bigger than just the cluster going offline and the probability of getting into a split brain scenario with a three-node cluster is practically zero.
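The witness arbitration described in the quoted article can be sketched as a toy model (not any real witness implementation - names and behavior are illustrative only): when the two nodes lose sight of each other, each asks the witness for a takeover lock, and only the first requester is allowed to run the workloads.

```python
class Witness:
    """Toy third-party arbiter: hands out a single takeover lock."""

    def __init__(self) -> None:
        self.holder = None  # which node currently holds the lock

    def request_takeover(self, node: str) -> bool:
        # First node to ask wins; everyone else must stand down.
        if self.holder is None:
            self.holder = node
        return self.holder == node

witness = Witness()
# Network split: both nodes can still reach the witness and ask to take over.
print(witness.request_takeover("node-a"))  # True  -> runs the workloads
print(witness.request_takeover("node-b"))  # False -> must NOT start them
```

The hard part the article alludes to is everything this toy omits: what happens if a node cannot reach the witness either, how the lock is released when the partition heals, and how to fence a node that ignores the verdict.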
- AnotherDan
- Forum Donator
- Posts: 2687
- Location: Søborg
High Availability (HA) is an architectural concept, and what matters most is where you look at it from... You can have one component (say, a database) that is HA even though the overall system solution is not...
What you're building here is way overkill for the need, but it's fun to tinker with.
A fancy tagline goes here...