It has been a long time since I wrote anything on Habr: I had neither the time nor, frankly, the ideas. But ideas are a dynamic flow: if there is something "on the input", an "output" will appear as well. Watching current trends in IT and looking around, a thought has matured: all of us infrastructure people are slowly but surely sliding into Software Defined Computing, i.e. the paradigm of proprietary hardware is being replaced by software solutions running on commodity components. We have SDN (Software Defined Networking), SDS (Software Defined Storage), and SDC, which is in essence an abstraction built on a hypervisor or on containerization.
Today I would like to look at solutions in the SDS field, this time going beyond Microsoft's own offerings: I wanted to see what third-party partner solutions exist and whether there is any life out there at all. Let me note right away that, out of old habit and greater competence, I will stick to the Windows world rather than tell a "red-eyed" Linux story (smile). The first thing that came to mind was StarWind. I remember StarWind's software well from the Windows Server 2003 days (rest in peace, binary kingdom); back then it was an easy and effective way to build inexpensive block storage out of a plain server with hard drives, instead of shelling out for an expensive storage array. Everything flows, everything changes: you cannot step into the same river twice. Similar functionality later appeared in Windows Server itself, and now the era of clouds is upon us, with the cloud stretching further and further over our infrastructure. So I remembered StarWind and thought: "Let me take a look; maybe they have something interesting." And I was not disappointed: the solution really is interesting, and its name is StarWind Virtual SAN.
2 years, 3 months ago
I was handed the task of setting up backups with a GUI, "just like the big boys have." Previously we used rsnapshot, and everything worked wonderfully until the volumes grew to hundreds of gigabytes: websites, databases, hundreds of test environments. The server fleet kept growing, and managing all of this became difficult. Among the available options we chose open source and settled on Bareos, as the one most commonly used, so that if something went wrong it would be easy to google.
2 years, 3 months ago
We still have a few seats left for tomorrow's backup seminar built around CommVault Simpana. Pleasant bonuses await all guests: a free test drive of the new service and a tour of one of the largest data centers in Russia, the OST data center. Here is a photo as a teaser :)
Continuing the topic of backup and recovery on storage systems with the new architecture, we will look at the nuances of working with deduplicated data in a disaster recovery scenario, where storage arrays with built-in deduplication are being protected: specifically, how this space-efficient storage technology can help, or hinder, data recovery.
2 years, 4 months ago
Every time the conversation turns to backup, a host of questions arises, and the common thread running through all of them is concern about reliability: reliability of recovery, reliability of storage, reliability of creating backup copies. A good backup product lets you get out of situations where reliability is questionable. A best-in-class product lets you avoid ending up in such situations in the first place.
It is not for nothing that folk wisdom says "don't keep all your eggs in one basket." There are plenty of examples with unhappy endings where backup copies were written to a dying data storage system (DSS), and sometimes the production data was stored on that very same array. Drawing on the operational experience of Veeam's 168,000 customers, and so that users do not repeat their colleagues' mistakes, Veeam's architects promote the idea of an "ideal backup architecture." Among other things, the "ideal architecture" implies separating backup storage meant for business continuity from backup storage meant for long-term archival.
There is a "3-2-1" rule, which says that your infrastructure should have:
3 copies of your data;
2 different types of media for storing backup copies;
1 copy kept off the main site.
In small and medium-sized organizations the main difficulty, as a rule, is with the last point: the need to have a backup site. Not everyone builds or leases a site for disaster tolerance. Sometimes building or leasing space in a data center, with the equipment purchases, service fees, and associated expenses, is simply not economically justified.
In that case it is high time to turn to cloud services, which can significantly shorten the time it takes to set up that notorious backup site and substantially reduce the cost of creating it.
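The 3-2-1 rule is easy to check mechanically. Below is a minimal sketch, not any particular vendor's implementation; the `BackupCopy` structure and the media/site labels are purely illustrative assumptions of mine:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class BackupCopy:
    # Illustrative fields: what media the copy lives on and where it is kept.
    media: str   # e.g. "disk", "tape", "cloud"
    site: str    # e.g. "main", "dr-site"

def satisfies_3_2_1(copies):
    """True if: >= 3 copies, >= 2 media types, >= 1 copy off the main site."""
    return (len(copies) >= 3
            and len({c.media for c in copies}) >= 2
            and any(c.site != "main" for c in copies))

good = [BackupCopy("disk", "main"),
        BackupCopy("tape", "main"),
        BackupCopy("cloud", "dr-site")]
print(satisfies_3_2_1(good))  # True
```

With a cloud repository as the off-site copy, the last condition is satisfied without building a physical backup site, which is exactly the point of the paragraph above.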
2 years, 4 months ago
There is a great number of posts persistently preaching one simple truth: backups must be made on a regular basis. But people will always fall into two categories: those who do not make backups yet, and those who already do. The first category, the one that scorns this advice, can often be found on specialized forums asking roughly the same question:
– My disks died, or someone deleted my database… how do I get my data back? – Do you have a recent backup? – No…
To avoid becoming the hero of such a story, only a minimum of effort is required. First, choose a disk array to store the backup copies on; keeping backups alongside the database files is obviously not our choice. Second, create a maintenance plan for backing up the databases.
That is exactly what we will do next, and afterwards we will discuss some of the subtleties associated with backups.
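As a warm-up for the maintenance-plan topic, here is a small sketch that builds a timestamped full-backup statement; the database name, target directory, and the chosen `WITH` options are my own illustrative assumptions, not a prescription:

```python
from datetime import datetime

def backup_statement(db_name: str, backup_dir: str, when: datetime) -> str:
    """Build a T-SQL full-backup statement with a timestamped file name,
    so that each run writes a new .bak instead of overwriting the last one."""
    stamp = when.strftime("%Y%m%d_%H%M%S")
    path = f"{backup_dir}\\{db_name}_{stamp}.bak"
    return (
        f"BACKUP DATABASE [{db_name}] "
        f"TO DISK = N'{path}' "
        f"WITH CHECKSUM, COMPRESSION, INIT;"
    )

print(backup_statement("Sales", r"E:\Backups", datetime(2015, 10, 1, 3, 0, 0)))
# BACKUP DATABASE [Sales] TO DISK = N'E:\Backups\Sales_20151001_030000.bak'
# WITH CHECKSUM, COMPRESSION, INIT;
```

Note the separate target volume (`E:` here) reflecting the first rule above: backups do not live next to the database files.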
Of course, methods of protecting data from loss are determined both by the volume of information and by the device on which it is stored, and both are constantly evolving.
That is why the debate between proponents of the traditional approach to backup and those looking for new data-protection methods, ones better suited to the architecture of the storage systems in use, has been going on for a long time. Lately this debate has intensified, because the variety of storage system types has grown sharply. Some of them require rethinking the usual approaches to operations and to ensuring availability, including backup, which is what I want to talk about here.
2 years, 6 months ago
Recently I happened to configure Akeeba Backup Pro to store backup copies remotely in Dropbox. In the process it turned out that Akeeba only knows how to litter that Dropbox; cleaning up the leftovers after it has to be done by hand. But doing it by hand is not comme il faut, especially with archives of a gigabyte and change. So I needed a way to get rid of the stale ones without manual work.
So, the given: full backups are uploaded to the full folder every three hours, and MySQL dumps go to the mysql folder every half hour. That is what the site owner wants; that is what he pays for Dropbox Pro for.
The goal: delete all old full archives, keeping one per day (and be glad of that!), and all MySQL backups except today's.
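The retention logic itself can be separated from the Dropbox API calls. Here is a minimal sketch of just that logic; the folder layout and file names are the ones described above, while the `prune` function and its input format are my own illustrative assumptions:

```python
from datetime import date, datetime

def prune(files, today):
    """Given {folder: {name: timestamp}} for the 'full' and 'mysql' folders,
    return the names to delete under the retention policy:
    full backups keep only the newest per calendar day;
    mysql backups keep only today's."""
    doomed = []

    # Full archives: group by day, keep only the latest in each group.
    by_day = {}
    for name, ts in files["full"].items():
        by_day.setdefault(ts.date(), []).append((ts, name))
    for items in by_day.values():
        items.sort()                                 # oldest first
        doomed += [name for _, name in items[:-1]]   # all but the newest

    # MySQL dumps: everything not from today goes.
    doomed += [name for name, ts in files["mysql"].items()
               if ts.date() != today]
    return doomed

files = {
    "full": {
        "site-20151001-0300.jpa": datetime(2015, 10, 1, 3),
        "site-20151001-0600.jpa": datetime(2015, 10, 1, 6),
        "site-20151002-0300.jpa": datetime(2015, 10, 2, 3),
    },
    "mysql": {
        "db-20151001-0300.sql.gz": datetime(2015, 10, 1, 3),
        "db-20151002-0300.sql.gz": datetime(2015, 10, 2, 3),
    },
}
print(sorted(prune(files, date(2015, 10, 2))))
# ['db-20151001-0300.sql.gz', 'site-20151001-0300.jpa']
```

Keeping the policy pure like this means it can be tested without touching Dropbox at all; the actual listing and deleting is then a thin layer over whatever Dropbox client you use.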
2 years, 6 months ago
"How many times they told the world" that a backup as an end in itself makes no practical sense; it only acquires one, of course, if the backup copy can be restored from quickly, correctly, and easily. So the subject of my post today, continuing the previous one, is restoring a physical machine from a backup copy created with Veeam Endpoint Backup FREE. As you have probably already guessed, the recovery options are closely tied to the backup settings: it goes without saying that you cannot restore the whole machine if, say, only the user folders were backed up. Let's look at these options in more detail; welcome under the cut.
2 years, 6 months ago
Not long ago a most interesting study, "A Large-Scale Study of Flash Memory Failures in the Field," was published by Qiang Wu and Sanjeev Kumar from Facebook together with Justin Meza and Onur Mutlu from Carnegie Mellon University. The main conclusions of the paper, with brief comments, are below.
Now that flash drives are very actively used as a high-performance replacement for hard drives, their reliability plays an increasingly important role: chip failures can lead to downtime and even data loss. To develop an understanding of how flash memory reliability changes in the real conditions of a heavily loaded project, the research presented in the paper was conducted.
The authors collected extensive statistics over four years of operating flash drives in Facebook's data centers.
As many surely know, Facebook was for a long time the best (and the main) customer of Fusion-io (since acquired by SanDisk), one of the first companies to release PCIe flash drives.
The analysis of the collected data yielded a number of interesting conclusions: