1 year ago
On our Habré blog we not only cover the development of the 1cloud service but also write a lot about new technologies, including processors and memory. Today we present an adapted translation of a note by Altera engineer Ron Wilson on how the evolution of flash memory technology may change the structure of today's data centers.

Perhaps a turning point in the history of data centers is upon us. For many years flash memory capacities have been growing, constantly expanding the capabilities of our phones, cameras and media players.

The arrival of new flash-based solid-state drives made it possible to develop more advanced tablets and laptops. Such drives have also begun to appear in data centers.

Almost unnoticed by us, the pace at which new technologies appear has sharply accelerated. NAND flash with a new vertical architecture has surpassed planar flash, allowing far more data to be stored on each die. New non-volatile technologies, such as Intel-Micron 3D XPoint memory, are already entering mass production.

Nor should we forget hard drives, whose technology also continues to develop. The result of all this will be an unprecedented change in the structure of data centers, which will begin with hyperscale clouds but gradually reach enterprises as well.
Today there are many applications and software suites from different vendors that we use to solve common tasks. Web services provide data exchange and interaction between applications. A number of tools have been released for testing them and for debugging their interaction with each other and with client applications. The most popular is SoapUI: it supports SOAP/WSDL, REST, HTTP(S), JDBC, JMS and offers a toolkit that makes testing simpler and more visual. SoapUI can act both as a test service and as a test client, and allows you to test the integration of subsystems. You can learn more about the tool on the developer's official site.
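SoapUI itself is a GUI tool, but the core idea of an integration check, calling a service endpoint and asserting on the response, can be sketched with nothing but the Python standard library. The stub service, the `/status` path and the expected JSON body below are all hypothetical stand-ins, not part of SoapUI or of any real service:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical stub standing in for the web service under test.
class StubHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"status": "ok"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

def check_service(url):
    """Integration check: the endpoint must answer 200 with the expected field."""
    with urllib.request.urlopen(url) as resp:
        assert resp.status == 200
        payload = json.loads(resp.read())
        assert payload["status"] == "ok"
        return payload

# Start the stub on an ephemeral port, then test against it like a client.
server = HTTPServer(("127.0.0.1", 0), StubHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]
result = check_service(f"http://127.0.0.1:{port}/status")
server.shutdown()
print(result)
```

A real SoapUI project adds far more on top of this (WSDL parsing, mocking, load tests), but the request/assert cycle is the same.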
If a task is solved with a single computer and a set of applications, a PC breakdown or a failure of one of the programs is discovered quickly. But what do you do when an organization has a lot of hardware and software? Checking every free minute whether everything is in order is physically difficult and very costly. Specialized monitoring systems, of which you can find many on the Internet, come to the rescue: one of them is Zabbix.
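Zabbix uses agents and its own protocol, but the basic polling idea behind such monitoring can be illustrated in a few lines of standard-library Python: periodically try to reach a host/port and record whether the service is up. This is a toy sketch, not how Zabbix actually works internally:

```python
import socket

def check_tcp(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds (service 'up')."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Demo: stand up a local listening socket and poll it like a monitored host.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
port = listener.getsockname()[1]

up_before = check_tcp("127.0.0.1", port)  # listener running: expect up
listener.close()
up_after = check_tcp("127.0.0.1", port)   # listener gone: expect down
print(up_before, up_after)
```

A monitoring system wraps this primitive in scheduling, alerting and history storage; the check itself stays this simple.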
In our blog we write about the development of our cloud service; we have also covered the "container revolution". The recent hype around containers has raised an important question: how will this technology get along with traditional approaches to infrastructure management, and what threat does it pose to the virtualization market? More specifically: will containers replace virtual machines?

At its annual conference in San Francisco, held in September 2015, VMware made it unambiguously clear that this will not happen. Its new management platform introduces a new type of virtualization: virtualization for containers.
The author of this article is Mikhail Komarov, MVP in Cloud and Datacenter Management.

Good afternoon! The purpose of today's article is to describe the implementation of nested virtualization on the Hyper-V platform. It is no secret that, unlike other vendors, Hyper-V did not support nested virtualization. With the release of the Windows Server 2016 Technical Preview 4 (TP4) build, intended for those who want to try the new functionality, the situation has changed. Demonstrations of nested virtualization can be seen in the recording of the talk "One talk, one laptop, one data center" from the Microsoft TechDay 2015 event.
We would like to wish all Habré readers a happy New Year and Christmas! Outside the window of our Moscow office the winter is in full swing: plenty of puddles and +8 on the thermometer. The year is coming to an end, and it is time to sum up the results. We decided to recall the most interesting things that happened in the IT world in 2015. Since the IT world is huge and remembering everything is an extremely difficult task (one for Schwarzenegger, perhaps), we narrowed our focus to the following companies: Cisco, HPE, Microsoft and VMware. As it turned out (who would have doubted it), trying to cover all the innovations of these companies would take a very serious effort. But the holidays are almost here, and we still need to buy gifts, get a Christmas tree and start making Olivier salad. So we tried to select only what, in our opinion, was most significant for us and our customers. At least, that is what we hope.

So, enough lyricism: here begins our top list of the solutions, devices and other items we selected, grouped by vendor.
In a previous article with a similar title, we claimed, and even proved, that low-cost hosting of virtual servers (VPS) is possible in Russia. But what is the situation with leasing dedicated servers? Is it possible to offer dedicated servers in Russia at Hetzner-like prices while providing clients with gigabit channels? We decided to do a little analysis and try to answer the question in the title. We also built a low-cost configurator of dedicated servers based on the hardware available to us. What came of it, you are about to learn.
Most modern cloud PBXes offer similar functionality. As a rule, it is a set of voice menus, call forwarding, various voice mailboxes and statistics. This gentleman's set can be found in almost every telephony SaaS. At the same time, there is a modest option, not very noticeable at first glance but very useful when applied correctly: conference calling. Historically, conferences have always been associated with big business, conference calls and important executive decisions. In developing our communication framework, we have tried to take a somewhat different path, building the conference module around the needs of small and medium-sized businesses.
In early December in London, at the European Discover conference, HPE presented the first results of the Synergy project, which aims to create solutions for "composable" IT infrastructure. In such an infrastructure it is possible to quickly (within minutes) deploy both traditional server applications (ERP systems, databases, MS Exchange e-mail servers, etc.) and new applications written specifically to run in the cloud. The Synergy project is the next stage in the evolution of the converged infrastructure that HP's enterprise products division has been using for several years in its integrated systems. HPE composable infrastructure is based on three key elements:

— Flexible resource pools (compute, storage and network) that provide a unified infrastructure for deploying different workloads: physical, virtual and containerized;

— Software-defined logic based on the use of templates;

— Unified APIs that provide simple and convenient integration of HPE Composable Infrastructure with any management and cloud-service platforms.

The composable infrastructure concept is implemented in the HPE Synergy system, with first shipments planned for the second quarter of next year. Synergy beta testing will take place in the first quarter of 2016, and attendees of the London Discover could see the system in the demo zone of the forum.
In our blog we write about the development of our cloud service; we have also published an adapted interview with a Berkeley professor on the development of artificial intelligence and big data. Today we present the story of Erik DeBenedictis, a former participant in the IEEE Rebooting Computing initiative and the International Technology Roadmap for Semiconductors, about trends in the supercomputer industry and the problems it faces.

Existing technologies already make it possible to build a supercomputer with exaFLOPS performance, that is, 10^18 floating-point operations per second. The jump to 10 exaFLOPS and beyond will require radical changes in both technology and computer architecture.
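To put that scale in perspective, here is a back-of-the-envelope sketch. The per-node performance figure is an assumption chosen purely for illustration, not a number from the source:

```python
# Back-of-the-envelope arithmetic for exascale machines.
EXA = 10**18                          # 1 exaFLOPS = 10^18 operations per second
node_perf = 50 * 10**12               # assumed: 50 teraFLOPS per accelerator node
nodes_1_exa = EXA // node_perf        # nodes needed to reach 1 exaFLOPS
nodes_10_exa = 10 * EXA // node_perf  # and to reach 10 exaFLOPS
print(nodes_1_exa, nodes_10_exa)      # 20000 200000
```

Even under this generous per-node assumption, 10 exaFLOPS means hundreds of thousands of nodes, which is why scaling by brute force alone stops working and architectural changes become necessary.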
Erik DeBenedictis, an engineer in the advanced technology solutions department of Sandia National Laboratories in Albuquerque, New Mexico, is developing a strategy to answer these new challenges. In the past he has worked with the IEEE Rebooting Computing initiative and the International Technology Roadmap for Semiconductors on planning the future of computers and supercomputers.

In his view, there are currently three possible paths to more powerful supercomputers: improved switching devices, 3D integration, and purpose-built (tailored) architectures. He spoke about all three options at the special session "Under the Burden of Moore's Law" at the latest SC15 conference (the International Conference for High Performance Computing, Networking, Storage and Analysis), held on November 15-20 in Austin, Texas.

Below is the engineer's account of the future of the supercomputer industry and the thoughts that arose from discussing its development with colleagues.
Information technology is becoming an integral part of products and services in the new style of IT, in which the business expects to receive requested resources for new applications almost instantly. Under this new paradigm, IT resources have to be allocated, used, returned and reused automatically from a shared pool of compute, storage and network nodes.