A platform for the design, operation, and development of critical IT infrastructures for working with big data
A new concept has been developed for building a platform for designing, operating, and developing critical IT infrastructures for big data. On its basis, a number of new models, methods, and tools have been created: tools for synthesizing and modeling neural network hardware and software structures that automate the operation of IT infrastructure components; new methods for the structured training of neural networks; tools for adapting programs to achieve efficient parallel programming on homogeneous and heterogeneous architectures; and models and methods for parallel computing in the critical IT infrastructures of today's data centers.

A parallel, distributed, dynamically scalable, fault-tolerant system for processing large-scale streaming data has been further developed. Its classes were designed using an algebraic-algorithmic methodology and tools for automated program generation from high-level specifications (schemes) of algorithms.

In addition, the following results have been obtained: models and methods for the efficient allocation of resources in critical IT infrastructures for big data; methods and mechanisms for automating key processes, namely managing performance and storage, defining and providing infrastructure and platform services, and allocating resources and workload; and algorithms, methods, and tools for synthesizing neural network hardware and software structures and for tuning (adapting) a neural network's structure to its information processing rate. A working prototype of a neural network controller for managing IT infrastructure components has been implemented.
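The parallel streaming idea described above can be sketched in miniature: a pool of workers consumes records from a shared queue, so throughput scales by adding workers. This is only an illustrative sketch, not the project's actual system; the `process` function, record values, and worker count are assumptions made for the example.

```python
# Illustrative sketch (not the project's implementation): a pool of
# worker threads drains a shared queue of stream records in parallel.
import queue
import threading

def process(record: int) -> int:
    return record * record  # stand-in for real per-record processing

def worker(tasks: "queue.Queue", results: list, lock: threading.Lock):
    while True:
        item = tasks.get()
        if item is None:          # poison pill: shut this worker down
            tasks.task_done()
            break
        out = process(item)
        with lock:                # results list is shared across workers
            results.append(out)
        tasks.task_done()

tasks: "queue.Queue" = queue.Queue()
results: list = []
lock = threading.Lock()
workers = [threading.Thread(target=worker, args=(tasks, results, lock))
           for _ in range(4)]    # "dynamic scaling" = vary this count
for t in workers:
    t.start()
for record in range(10):         # simulated incoming stream
    tasks.put(record)
for _ in workers:                # one poison pill per worker
    tasks.put(None)
tasks.join()
print(sorted(results))           # squares of 0..9, order restored
```

A real fault-tolerant system would add record acknowledgement and replay on worker failure; the queue-plus-pill pattern above is just the core parallel-consumption step.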
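The neural network controller mentioned above can be illustrated with a minimal sketch: a tiny feedforward network that maps observed infrastructure metrics to a scaling decision. All names, the network size, and the weights here are hypothetical, chosen for the example rather than taken from the prototype; a real controller would learn its weights from monitoring data.

```python
# Hypothetical sketch: a one-hidden-layer network mapping metrics
# (cpu load, queue depth, memory use, each in [0, 1]) to an action.
# Weights are hand-picked for illustration, not trained.
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

class ScalingController:
    """3 inputs -> 2 hidden units -> 1 output (scale-out probability)."""
    def __init__(self):
        self.w_hidden = [[2.0, 1.0, 0.5], [-1.5, 0.5, 1.0]]
        self.b_hidden = [-1.0, -0.5]
        self.w_out = [1.5, 1.0]
        self.b_out = -1.2

    def decide(self, cpu: float, queue_depth: float, mem: float) -> str:
        x = [cpu, queue_depth, mem]
        # forward pass through the hidden layer
        h = [sigmoid(sum(w * v for w, v in zip(row, x)) + b)
             for row, b in zip(self.w_hidden, self.b_hidden)]
        # output unit: probability that more capacity is needed
        y = sigmoid(sum(w * v for w, v in zip(self.w_out, h)) + self.b_out)
        return "scale_out" if y > 0.5 else "hold"

ctrl = ScalingController()
print(ctrl.decide(0.9, 0.8, 0.7))  # heavily loaded -> scale_out
print(ctrl.decide(0.1, 0.0, 0.2))  # lightly loaded -> hold
```

The point of the sketch is the control loop's shape: metrics in, forward pass, discrete management action out; tuning the network's structure to the required information processing rate would amount to resizing and retraining this network.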