No. 6 (2024)

Published: 2025-03-27

SECTION I. INFORMATION PROCESSING ALGORITHMS

  • ALGORITHM FOR CLASSIFICATION OF FIRE HAZARDOUS SITUATIONS BASED ON KOLMOGOROV-ARNOLD NETWORK

    Sanni Singh, A.V. Pribylskiy
    6-15
    Abstract

    The problem of timely and accurate detection of fire hazardous situations is critical to ensuring
    the safety of people and property. Traditional monitoring methods based on simple threshold values
    for smoke and temperature sensors are often insufficiently effective: they can raise false alarms
    or miss real fire hazardous situations. Modern neural network methods can significantly improve
    the accuracy of classifying an emergency situation by analyzing complex patterns in sensor data,
    which are governed by complex nonlinear functions with dynamically changing parameters. Developing
    such models requires careful attention to data collection, labeling, and processing, and to the
    choice of neural network architecture for the specific task: high-quality labeling and a
    well-matched architecture directly determine which patterns the model can capture, including
    hidden patterns that are impossible or difficult to find with traditional methods. The article
    examines an algorithm for classifying fire hazardous situations based on the Kolmogorov-Arnold
    network (KAN). The algorithm processes data from a complex of interconnected fire sensors and is
    designed to detect and classify various types of fire hazardous situations. The key element of the
    development is the Kolmogorov-Arnold network, whose architecture can model complex functional
    dependencies between input variables. Readings from interconnected fire sensors, such as
    temperature and smoke sensors, serve as input data. To improve classification accuracy, the data
    is labeled using expert knowledge. The algorithm was implemented in Python with the PyTorch,
    pykan, and scikit-learn libraries. The article presents the results of testing the model on real
    data and discusses possible directions for further improvement of the algorithm. The experiments
    showed that the proposed model achieves high accuracy in classifying fire hazardous situations,
    not inferior to traditional classification methods.
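    The abstract's key idea is the Kolmogorov-Arnold form f(x) = Σ_q Φ_q(Σ_p φ_qp(x_p)), in which
    every learned function is univariate. A minimal dependency-free sketch of a forward pass through
    that structure follows; the authors' actual model uses PyTorch/pykan with trainable splines,
    whereas the piecewise-linear functions, widths, and the fire/normal threshold here are purely
    illustrative assumptions.

```python
def pwl(knots, values, x):
    """Evaluate a piecewise-linear univariate function defined by
    increasing knot positions and their values; clamps outside the range."""
    if x <= knots[0]:
        return values[0]
    if x >= knots[-1]:
        return values[-1]
    for i in range(len(knots) - 1):
        if x <= knots[i + 1]:
            t = (x - knots[i]) / (knots[i + 1] - knots[i])
            return (1 - t) * values[i] + t * values[i + 1]

def kan_forward(x, inner, outer):
    """Kolmogorov-Arnold form: f(x) = sum_q Phi_q( sum_p phi_qp(x_p) ).
    inner[q][p] and outer[q] are (knots, values) pairs; in a real KAN
    these univariate functions are trainable splines."""
    total = 0.0
    for q, Phi in enumerate(outer):
        s = sum(pwl(*inner[q][p], xp) for p, xp in enumerate(x))
        total += pwl(*Phi, s)
    return total
```

    For classification, the scalar output (or one output per class) would be thresholded or passed
    through a softmax; training the knot values is what pykan automates.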

  • OVERVIEW AND ANALYSIS OF THREE-DIMENSIONAL PACKAGING FOR MARINE CARGO TRANSPORTATION

    V.V. Kureichik, Y.V. Balyasova, V.V. Bova
    Abstract

    This article describes the problem of three-dimensional packing of goods into various types of
    containers during maritime cargo transportation. Maritime cargo transportation plays a significant
    role in international trade. It is carried out under specific and non-standard conditions,
    characterized by increased humidity, contact with sea salt, vibration, and temperature
    fluctuations, by container ships carrying various categories of goods in containers selected to
    match the specifics of the cargo being transported, which ensures reliability and safety.
    Protecting goods from a variety of negative environmental and man-made factors is especially
    important, which underscores the need for properly designed marine cargo packaging: it must
    preserve goods, equipment, raw materials, or other materials throughout transportation by sea and
    provide reliable fastening on deck or inside cargo compartments, excluding damage to cargo from
    vibration and static loads. The article formulates the task of three-dimensional packing in
    containers for marine cargo transportation. Criteria and constraints are considered, and a
    modified combined multi-criteria objective function is constructed; its value should tend to 1,
    which corresponds to 100% filling of voids. The paper also provides a brief overview and analysis
    of methods and algorithms for solving the three-dimensional packing problem, revealing their
    features, advantages, and disadvantages. Based on this analysis, it is noted that metaheuristic
    methods and search algorithms are effective for this NP-hard problem, as they can obtain sets of
    quasi-optimal solutions in polynomial time.
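    The abstract states that the combined objective should tend to 1, corresponding to 100% filling
    of voids. A minimal sketch of such a criterion is below; the actual paper's constraint terms and
    weights are not given, so the feasibility term and the 0.8/0.2 weighting are illustrative
    assumptions only.

```python
def fill_ratio(container, items):
    """Fraction of container volume occupied by packed boxes; a value of
    1.0 corresponds to 100% filling of voids.  Arguments are (L, W, H)
    triples in the same length unit."""
    cl, cw, ch = container
    used = sum(l * w * h for (l, w, h) in items)
    return used / (cl * cw * ch)

def combined_objective(container, items, constraints_ok, weights=(0.8, 0.2)):
    """Illustrative combined multi-criteria objective: volume utilisation
    plus a binary feasibility term (e.g. weight distribution, stacking
    rules satisfied).  Both terms, and the weights, are assumptions."""
    a, b = weights
    return a * fill_ratio(container, items) + b * (1.0 if constraints_ok else 0.0)
```

    A metaheuristic (e.g. a genetic algorithm) would then search over packing orders and orientations
    to push this value toward 1.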

  • ALGORITHM FOR CONSTRUCTING THE ROUTE OF A ROBOTIC COMPLEX USING THE FUZZY LOGIC METHOD

    E.A. Nazarov, M.E. Danilin, E.Y. Kosenko
    Abstract

    This article presents the mathematical justification of a path planning algorithm for a mobile
    robotic complex (MRC) that follows an operator during autonomous control tasks using artificial
    intelligence (AI). The proposed approach implements a "follow me" autonomous following task for
    the MRC. A pursuit method is selected as the primary method, ensuring that the MRC follows the
    leading operator at a specified distance. The MRC's movement is simulated in a moving coordinate
    system to describe more accurately the movement of a material point along a curvilinear
    trajectory. The input data consists of two dynamic arrays containing the distance from the MRC's
    camera to the leading operator and the course angle between the complex's longitudinal axis and
    the line of sight. Path planning is performed with a delay, after the leading operator has
    conditionally taken one step away from the robot. Introducing fuzziness into the control process
    means evaluating actions and reactions with a set of linguistic terms, each associated, with a
    certain degree of confidence, with specific intervals of physical quantities. Based on this
    approach, an algorithm was developed and implemented in Python using the built-in fuzzy logic
    functions of the scikit-fuzzy (skfuzzy) library. Simulation modeling was conducted to evaluate
    the accuracy of the target function implementation. Analysis of the results revealed the main
    advantages of using fuzzy logic for automation tasks compared to traditional approaches in
    automatic control theory.
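    To make the "terms associated with intervals of physical quantities" idea concrete, here is a
    dependency-free sketch of a follow-distance controller with triangular membership functions and
    weighted-average defuzzification. The paper uses skfuzzy and a richer rule base over both distance
    and course angle; the term names, breakpoints, and output speeds below are assumed values for
    illustration.

```python
def tri(a, b, c, x):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def follow_speed(distance_error):
    """Map the distance-to-operator error (m, positive = lagging behind)
    to a robot speed (m/s) via three fuzzy rules and a weighted average
    of crisp rule outputs (zero-order Sugeno style)."""
    rules = [
        (tri(-2.0, -1.0, 0.0, distance_error), 0.0),   # too close  -> stop
        (tri(-0.5,  0.0, 0.5, distance_error), 0.5),   # on target  -> cruise
        (tri( 0.0,  1.0, 2.0, distance_error), 1.2),   # lagging    -> speed up
    ]
    num = sum(w * v for w, v in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0
```

    The smooth blending between neighbouring terms is exactly what gives fuzzy control its advantage
    over hard thresholds noted in the abstract.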

  • METHOD OF GENETIC PROGRAMMING FOR SOLVING THE PROBLEM OF OPERATIONAL SCHEDULE PLANNING OF DISCRETE PRODUCTION

    K.O. Obukhov, I.Y. Kvyatkovskaya, A.V. Morozov
    Abstract

    One of the main conditions for the successful functioning of the enterprise is a well-organized production
    planning process. Production planning systems of the APS/MES class, the basis of which are algorithms
    for building production plans, allow automating this activity. The paper examines the problem of
    scheduling for enterprises of a discrete type of production, related to the field of multi-criteria optimization
    problems. A formal description of the planning task is given, taking into account the main production
    constraints (time constraints, equipment requirements and the order of operations). The main methods of
    solving problems of this class are briefly considered; their main advantages and disadvantages are noted.
    To solve this problem, an approach based on the generation of heuristic rules used in planning production
    operations for specified resources has been chosen. Based on this approach, a two-stage algorithm for
    building production schedules is proposed, which includes the generation of dispatching rules and their
    further application in building schedules. A genetic algorithm is responsible for generating dispatch rules.
    The implementation of its genetic operators is described in detail, as is the composition of the
    chromosome and the tree representation of the dispatching rules it encodes. The algorithm is
    implemented in C# 12 on the free .NET 8 platform. The implemented algorithm has shown its effectiveness
    in comparison with the greedy algorithm on small generated datasets. Further research in this area is
    aimed at evaluating the effectiveness of the constructed algorithm with more complex genetic operators
    and the structure of the expression tree, as well as reducing the duration of the process of generating heuristic
    rules for large data sets.
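    The core data structure described above is a dispatching rule encoded as an expression tree
    inside a chromosome. A toy sketch of evaluating such a tree as a job priority follows (the paper's
    implementation is in C#; Python is used here for consistency with the other sketches, and the
    attribute names 'due' and 'proc' are hypothetical).

```python
import operator

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def evaluate(node, job):
    """Evaluate a dispatching-rule expression tree for one job.
    Leaves are job-attribute names; internal nodes are ('op', left, right)."""
    if isinstance(node, str):
        return job[node]
    op, left, right = node
    return OPS[op](evaluate(left, job), evaluate(right, job))

def dispatch(jobs, rule):
    """Order jobs by ascending rule value (lower value = higher priority),
    as a dispatcher would when assigning operations to a free resource."""
    return sorted(jobs, key=lambda j: evaluate(rule, j))
```

    A genetic algorithm then searches over such trees: crossover swaps subtrees, mutation replaces a
    node, and fitness is the quality of the schedule the rule produces.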

  • METHOD AND ALGORITHM FOR EXTRACTING FEATURES FROM DIGITAL SIGNALS BASED ON NEURAL NETWORKS TRANSFORMER

    Z.A. Ponimash, M.V. Potanin
    52-64
    Abstract

    Recently, neural network models have become one of the most promising directions in the field of automatic
    feature extraction from digital signals. Traditional approaches, such as statistical, time-domain,
    frequency-domain, and time-frequency analysis, require significant expert knowledge and often prove insufficiently
    effective when dealing with non-stationary and complex signals, such as biomedical signals (ECG,
    EEG, EMG) or industrial signals (e.g., current waveforms). These methods have several limitations
    when it comes to analyzing multichannel data with varying frequency structures, or when signal
    labeling is too labor-intensive or expensive. Modern neural network architectures, such as transformers, have demonstrated high
    efficiency in automatic feature extraction from complex data. Transformers have outperformed traditional
    convolutional and recurrent neural networks in many key metrics, particularly in tasks involving time series
    forecasting, multimodal data classification, and feature extraction from sequences. Their ability to model
    complex temporal dependencies and nonlinear relationships in data makes them ideal for tasks such as noise
    filtering and multimodal signal processing. This paper proposes a method for feature extraction
    from digital signals based on a modified transformer architecture that incorporates a nonlinear
    layer after the self-attention module. This approach improved the model's ability to detect
    complex and nonlinear dependencies in the data, which is particularly important when dealing with
    biomedical signals and signals obtained from industrial systems. A description of the architecture and of the experiments performed is given, demonstrating
    the high performance of the model in solving signal classification, prediction and filtering problems.
    It is expected that the model can be applied to a wide range of applications including disease and fault
    diagnosis, signal parameter prediction and system modelling.
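    The architectural modification described above, a nonlinearity applied after the attention
    block, can be illustrated numerically without a deep-learning framework. The sketch below is a
    single-head scaled dot-product attention followed by an elementwise tanh; the paper's actual layer,
    head count, and nonlinearity are not specified here, so those choices are assumptions.

```python
import math

def softmax(row):
    m = max(row)
    e = [math.exp(v - m) for v in row]
    s = sum(e)
    return [v / s for v in e]

def attention_with_nonlinearity(Q, K, V):
    """Scaled dot-product attention over lists-of-lists, then tanh applied
    elementwise -- mirroring the idea of inserting a nonlinear layer after
    the self-attention module."""
    d = len(Q[0])
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in K]
        w = softmax(scores)
        ctx = [sum(wi * v[j] for wi, v in zip(w, V)) for j in range(len(V[0]))]
        out.append([math.tanh(c) for c in ctx])
    return out
```

    In a real model Q, K, V are learned projections of the input signal windows, and the extracted
    rows serve as features for downstream classification or forecasting heads.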

  • THE TECHNIQUE OF AUTOMATED IMAGE RESTORATION USING CONVOLUTIONAL NEURAL NETWORKS

    G.A. Khrishkevich, D.A. Andreev, L.V. Motaylenko, Y.V. Bruttan, O.N. Timofeeva
    Abstract

    The task of restoring lost fragments of monumental painting is relevant in the context of preserving
    cultural heritage sites. Modern artificial intelligence technologies, including convolutional neural networks
    (CNN), significantly expand the possibilities of restoration, allowing for the automation of complex
    image restoration processes. In particular, the restoration of lost elements of frescoes requires precise
    analysis tools that can predict missing fragments with minimal errors, while preserving the artistic style of
    the original. The purpose of this study is to develop a technique of automated restoration of lost fragments
    of monumental painting images using CNN (using frescoes as an example). This goal was achieved by
    solving the following problems: obtaining fresco images using appropriate methodological and technical
    tools, applying the U-Net architecture for image segmentation and reconstruction, predicting lost areas
    based on color characteristic analysis. The photogrammetry method and the designed device, which were
    used to perform multi-angle shooting, provided high-quality source data for subsequent processing. Adaptation
    of the U-Net architecture to the image segmentation task has proven its effectiveness in identifying
    key structural elements of frescoes, which contributed to the accurate reconstruction of lost areas.
    To predict the lost areas, color characteristics were analyzed in the HSL system, which allowed the CNN
    to predict the missing colors with a high degree of accuracy. The study's conclusions show that
    the proposed technique can restore both the shape and color of lost fragments of frescoes. The proposed
    technique is planned to be used for the restoration of other types of art works, which makes it promising
    for further research.
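    The color-prediction step works in the HSL system. A small sketch of that analysis using the
    standard library follows; note that Python's colorsys module uses the HLS argument order, and that
    predicting a lost pixel as the mean colour of the intact neighbourhood is a simplification of the
    CNN-based prediction described in the abstract.

```python
import colorsys

def mean_hsl(pixels):
    """Average hue/saturation/lightness of 8-bit RGB pixels, e.g. sampled
    from the intact region around a lost fresco fragment.  (Naive hue
    averaging ignores wrap-around; adequate for narrow palettes.)"""
    h_sum = s_sum = l_sum = 0.0
    for r, g, b in pixels:
        h, l, s = colorsys.rgb_to_hls(r / 255, g / 255, b / 255)
        h_sum += h; l_sum += l; s_sum += s
    n = len(pixels)
    return h_sum / n, s_sum / n, l_sum / n

def predict_rgb(h, s, l):
    """Convert a predicted HSL colour back to 8-bit RGB for in-painting."""
    r, g, b = colorsys.hls_to_rgb(h, l, s)
    return round(r * 255), round(g * 255), round(b * 255)
```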

  • AN ALGORITHM FOR FORMING A PROFILED REFLECTOR OF A REFLECTOR ANTENNA IN PROBLEMS OF ELECTRODYNAMIC MODELING

    K.M. Zanin
    Abstract

    When designing satellite communication complexes placed on board spacecraft, it is required
    to ensure a given communication quality within the established service area. The workspace in
    such tasks can have a complex border shape. To cover a given area, on-board antenna systems are used,
    which implement a contour pattern. The quality of communication is directly related to the parameters of
    the main lobe of the directional pattern. The directional pattern should take this factor into account, and
    the main lobe should be as close in shape as possible to the contour of the border of the serviced area.
    One possible option for designing an antenna system with a contour pattern is the use of a reflector
    antenna. The antenna has a single source and a reflector with a profiled surface. The law of profiling the
    reflector surface is determined by the shape of the boundary of the serviced area. At the antenna design
    stage, it becomes necessary to model and analyze the parameters of the radiation pattern. This requires a
    3D model of a profiled reflector. This 3D model is used as input data for electrodynamic modeling programs.
    The construction of a 3D model consists of solving the equation that describes the reflector and
    forming the results of solving the equation in the form of a solid. The analysis of the published articles
    showed that currently the issues of forming 3D models, taking into account the design features of reflector
    antennas, are not considered in sufficient detail. The goal of the work was to build a 3D model of a profiled
    reflector for electrodynamic modeling, taking into account the features of the construction of reflector
    antennas. To achieve this goal, the task of developing an appropriate algorithm has been solved. In the
    course of the conducted research, an algorithm for forming a profiled reflector has been developed, which
    allows creating an appropriate 3D model that can be used in electrodynamic modeling tasks. The developed
    algorithm converts the results of solving an equation that encodes the shape of the reflector
    into a discretized surface on which boundary conditions can be set.
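    The conversion of a reflector equation into a surface model for an electrodynamic solver can be
    sketched as sampling the surface on a polar grid and emitting triangular facets. The profiling law
    below (an unmodified paraboloid z = (x² + y²)/(4F)) is only a stand-in; the paper's profiled
    reflector is defined by the service-area boundary, which is not reproduced here.

```python
import math

def reflector_mesh(focal, radius, n_r=8, n_phi=16):
    """Sample z = (x^2 + y^2) / (4*focal) over a circular aperture on a
    polar grid and triangulate into facets.  Returns (vertices, triangles)
    with triangles as index triples.  (The innermost ring collapses to the
    apex, producing degenerate facets -- acceptable for a sketch.)"""
    verts = []
    for i in range(n_r + 1):
        r = radius * i / n_r
        for j in range(n_phi):
            phi = 2 * math.pi * j / n_phi
            x, y = r * math.cos(phi), r * math.sin(phi)
            verts.append((x, y, (x * x + y * y) / (4 * focal)))
    tris = []
    for i in range(n_r):
        for j in range(n_phi):
            a = i * n_phi + j
            b = i * n_phi + (j + 1) % n_phi
            c = (i + 1) * n_phi + j
            d = (i + 1) * n_phi + (j + 1) % n_phi
            tris.append((a, c, d))
            tris.append((a, d, b))
    return verts, tris
```

    For a profiled reflector, the z-expression would be replaced by the numerical solution of the
    profiling equation; the facet list is what a solver uses to impose boundary conditions.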

  • METHOD OF MOVING OBJECT POSITIONING WITHOUT USING GLOBAL GEO-REFERENCED DATA

    E.V. Lishchenko, E.V. Melnik, A.S. Matvienko, A.Y. Budko
    Abstract

    The paper considers the problem of determining the current coordinates of a moving object under
    conditions of an unstable signal from a global navigation satellite system (GNSS). The relevance
    of the work stems from the fact that in recent years moving objects have been used increasingly
    in virtually all sectors of industry, agriculture, and transportation, solving a variety of tasks
    in surveillance, reconnaissance, monitoring the state of controlled objects, search and rescue
    operations, cargo delivery, and much more. At the same time, the success of flight missions
    largely depends on how accurately and efficiently the onboard navigation system works in real
    time. Existing solutions for onboard positioning systems combine inertial navigation with GNSS.
    However, they suffer from partial or complete absence of GNSS data. This paper describes a method
    for maintaining a given accuracy of spatial positioning of a moving object under conditions of
    partial or complete absence of data from the object's GNSS receiver. The approach is based on a
    combination of computer vision methods for processing video stream frames from the moving
    object's on-board vision system (OVS) in order to ensure positioning accuracy when satellite
    navigation data is partially or completely unavailable. Based on the proposed method, an
    algorithm has been developed for automated determination of the coordinates of a moving object in
    the absence of georeferencing data from global positioning systems. Experiments demonstrated a
    reduction in the time spent on key point description and matching, and an improvement in the
    accuracy of image matching. The developed algorithm was applied to the problem of satellite image
    matching, an important step in positioning a moving object without global geo-referenced data.

SECTION II. DATA ANALYSIS AND MODELING

  • TRAJECTORY PLANNING SYSTEM FOR THE MOVEMENT OF A DELTA ROBOT FOR AGRICULTURAL PURPOSES

    V.V. Soloviev, A.Y. Nomerchuk, R.K. Filatov
    Abstract

    The aim of this work is to develop a trajectory planning system for the movement of a delta robot
    used for weed cultivation. The delta robot is mounted on a mobile platform that moves between rows of
    cultivated plants. A vision system detects weeds and determines their coordinates. The system is tasked
    with planning the trajectory of the robot's gripper during weed removal, ensuring no damage is done to
    either the robot or the plants. This research is highly relevant due to the growing global population, decreasing
    arable land, rural depopulation, and a reduction in the availability of agricultural machinery.
    To achieve this goal, the work presents a solution to both the forward and inverse kinematics of the delta
    robot using an analytical approach. A model for determining the structural parameters of the delta robot
    is proposed, which allows the evaluation of how these parameters affect the robot’s working area.
    The lengths of the delta robot's arms are determined, tailored to the task of weed removal in corn fields.
    The trajectory planning problem is addressed by decomposing the motion into horizontal movement of the
    gripper and vertical movement, considering the size of soil clumps and the magnitude of weed extraction.
    Experimental results demonstrate the possibility of significantly reducing the number of trajectory points,
    thus lowering the computational complexity of the proposed methods and simplifying their implementation
    in the robot's onboard computer.
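    The decomposition of the gripper motion into vertical and horizontal segments, with allowance
    for soil-clump height and weed-extraction depth, can be sketched as a four-waypoint generator.
    The clearance and extraction values below are hypothetical defaults, not the parameters
    identified in the paper.

```python
def weed_trajectory(start, weed, clump_h=0.05, extract=0.03):
    """Waypoints (x, y, z) for the delta robot gripper: lift clear of soil
    clumps, travel horizontally, plunge to extraction depth, pull the weed
    out vertically.  Coordinates in metres, z up, soil surface at z = 0."""
    sx, sy, sz = start
    wx, wy, _ = weed
    safe_z = max(sz, clump_h)
    return [
        (sx, sy, safe_z),     # vertical lift above clump height
        (wx, wy, safe_z),     # horizontal transfer over the row
        (wx, wy, -extract),   # plunge to weed-extraction depth
        (wx, wy, safe_z),     # vertical pull-out
    ]
```

    Reducing a trajectory to a handful of such waypoints is what lowers the computational load on the
    onboard computer, as noted in the abstract; each waypoint is then passed through the inverse
    kinematics to obtain joint angles.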

  • SYSTEM ANALYSIS AND MODELING OF QUEUE SYSTEMS

    A.A. Bognyukov, D.Y. Zorkin, E.G. Shvedov
    Abstract

    This article focuses on automation systems used in car dealerships for the sale and repair of
    vehicles. Automating this domain requires considering not only the existing processes but also
    their optimization with modern technologies, which complicates the analysis of such systems. The implementation of such solutions can
    lead to the creation of more efficient models that reflect the real operating conditions of car dealerships.
    Understanding the key concepts of automation helps not only to structure the research but also to identify
    directions for further development. Studying existing models and systems allows for the identification of
    best practices and potential shortcomings. Comparative analysis helps not only to adapt proven solutions
    to new conditions but also to avoid mistakes made in previous studies. System analysis and modeling of
    queuing systems represent key aspects in the management and optimization of business processes, including
    such complex areas as automation of the sales and repair process of cars. In the modern world of high
    technology, where competition in the market of goods and services is constantly growing, the use of system
    analysis allows enterprises to find effective solutions to improve their operations. Queuing
    systems (hereinafter referred to as QS) are a central element in various sectors of the economy, including car dealerships
    and service centers. They are aimed at optimizing customer flows and resources in order to improve
    the quality of service and minimize waiting times. The main task of system analysis in this context is to
    study the structure, behavior and interaction of system components in order to identify weaknesses and
    find optimal strategies to overcome them. To automate the work of the car dealership, a subject area was
    selected, including key elements: staff, customers, cars, services and contracts. These elements are interconnected
    and form the basis for the projected database. Each of the elements has its own essence, and
    their interaction through contracts becomes the basis for the development of a relational database.
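    The abstract's goal of minimizing waiting times while sizing resources is classically expressed
    with queueing formulas. As a concrete illustration (not taken from the paper), here are the
    steady-state metrics of the simplest model, an M/M/1 queue such as a single service bay with
    Poisson arrivals and exponential service times.

```python
def mm1_metrics(lam, mu):
    """Steady-state M/M/1 metrics for arrival rate lam and service rate mu
    (both per unit time; requires lam < mu for stability):
    utilisation rho, mean number in system L, mean wait in queue Wq."""
    if lam >= mu:
        raise ValueError("queue is unstable: arrival rate >= service rate")
    rho = lam / mu
    L = rho / (1 - rho)        # mean number of customers in the system
    Wq = rho / (mu - lam)      # mean time a customer waits before service
    return rho, L, Wq
```

    Comparing Wq across candidate staffing levels is exactly the kind of trade-off the system
    analysis described above is meant to support.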

  • MATHEMATICAL MODELING OF THE INFLUENCE OF ATMOSPHERIC PRECIPITATION ON HYDROLITHOSPHERIC PROCESSES

    M.A. Georgieva, I.M. Pershin
    Abstract

    This article is devoted to the study of the influence of atmospheric precipitation on
    hydrolithospheric processes using mathematical modeling. Hydrolithospheric processes involve interactions
    between water, the atmosphere, and the Earth's crust, playing an important role in shaping the landscape,
    water cycle, and Earth's climate. Using historical data on precipitation and hydrolithospheric processes,
    the authors calibrate and validate their model. Results show that the model can accurately predict
    changes in water flow, soil erosion, and water quality in response to changes in the precipitation regime.
    This paper presents the development and application of mathematical models to study the effects of precipitation
    on runoff generation, soil erosion, water table changes, and geomorphologic processes occurring
    in the hydrolithosphere. The paper analyzes different types of models, including: – surface runoff
    models, which describe the formation and movement of runoff over the land surface; – soil erosion models,
    which predict the intensity of erosion processes caused by precipitation; – groundwater models, which
    study the effect of precipitation on the water table and its movement in groundwater aquifers; – models of
    geomorphologic processes, which study the influence of precipitation on the formation of relief, formation
    of ravines, slopes and other geomorphologic elements. Problems of model validation and calibration, as
    well as uncertainties associated with precipitation variability, were considered and studied. The results of
    the study provide a better understanding of the interaction of precipitation with the hydrolithosphere and
    present opportunities for using mathematical modeling to predict hydrolithospheric processes and develop
    water management strategies. The article has important implications for understanding and managing
    hydrolithospheric processes in a changing climate. The mathematical model developed in the article can
    be used to assess the potential impacts of changing precipitation amounts and patterns, and to develop
    adaptation strategies to mitigate these impacts.
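    The surface-runoff models mentioned above can be illustrated by the simplest conceptual form, a
    single linear reservoir: dS/dt = P − Q with outflow Q = S/k, integrated by forward Euler. This is
    a textbook stand-in, not the model calibrated in the paper, and the storage constant k is an
    assumed parameter.

```python
def linear_reservoir(precip, k=5.0, dt=1.0, s0=0.0):
    """Simulate runoff from a linear reservoir driven by a precipitation
    series: storage S obeys dS/dt = P - Q with Q = S / k.
    precip: depth per time step; returns the runoff series Q."""
    s, flows = s0, []
    for p in precip:
        q = s / k              # outflow proportional to current storage
        s += dt * (p - q)      # forward-Euler storage update
        flows.append(q)
    return flows
```

    Calibration, as described in the abstract, amounts to fitting k (and richer model structure) so
    that simulated runoff matches historical observations.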

  • MODELING OF SOCIAL INTERACTIONS BASED ON GRAPH APPROACHES

    E.R. Zyablova
    Abstract

    The article proposes an approach to modeling social interactions in organizational systems, which
    consists of several stages: obtaining data about system users, for example, using network parsing; forming
    a GH-model of the system based on fuzzy graphs with different types of vertices and multiple different
    types of edges; calculating graph characteristics taking into account a certain type of edges; using values
    of graph characteristics to analyze the system taking into account the inherent semantic load. The expediency
    of using the GH graph for the study of social relations in organizational systems is substantiated,
    since it has a number of advantages. The GH-graph makes it possible to specify all the necessary
    multi-type relationships while reducing system analysis time by a factor of 1.9, because multiple
    edges are stored as a vector that combines several different edge types. Modification of the model consists
    in using different types of vertices. The type of vertices in the graph is determined by calculating their
    characteristics. The paper shows the process of forming a graph model of a subsystem and calculating its
    characteristics. The results of calculating the degrees of vertices and their centrality by degrees are
    shown. To calculate the metric characteristics of the graph model, a modified algorithm for finding shortest
    paths in the GH-graph was used, which was previously developed. A special feature of this algorithm
    is the ability to use filters based on the type of vertices and edges. Numerical indices of the radius and
    diameter of the graph are obtained, groups of central and peripheral vertices are determined, the centrality
    of vertices in proximity is calculated, taking into account the selected types of edges for the study of
    different types of relations in the system. The analysis of the subsystem is carried out using the example of
    solving two practical problems. Groups of employees of the enterprise were identified among the network
    users, their possible statuses and communicative activities were determined. The user status refers to belonging
    to groups of managers of different levels, a group of ordinary employees of the enterprise. A solution
    to the problem of identifying users (or groups of users) most suitable for the dissemination (or,
    conversely, containment) of information on the network is proposed.
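    The modified shortest-path algorithm with filters on edge types can be sketched as Dijkstra's
    algorithm over a multigraph whose edges carry a type label. This is a generic illustration of the
    filtering idea, not the paper's algorithm; the edge-type labels below ("formal", "informal") are
    hypothetical.

```python
import heapq

def shortest_path(edges, src, dst, allowed_types):
    """Dijkstra over an undirected multigraph given as (u, v, weight, type)
    tuples; only edges whose type is in allowed_types are traversed,
    mirroring the GH-graph's per-type analysis of social relations."""
    adj = {}
    for u, v, w, t in edges:
        if t in allowed_types:
            adj.setdefault(u, []).append((v, w))
            adj.setdefault(v, []).append((u, w))
    dist, heap = {src: 0.0}, [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            return d
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return float("inf")
```

    Running the same query with different allowed_types sets is what lets metric characteristics
    (radius, diameter, closeness centrality) be computed per relation type.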

  • PREDICTION OF FAULTS IN TECHNICAL SYSTEMS BASED ON THE SIMILARITY MODEL OF THE REMAINING USEFUL LIFE

    Y.A. Korablev
    Abstract

    This paper demonstrates how to construct a complete Remaining Useful Life (RUL) estimation workflow,
    including the steps of preprocessing, selecting trend features, constructing a health indicator by fusing sensors,
    training RUL similarity estimators, and verifying the prediction performance. The method was
    tested in a MATLAB demo program for predicting the occurrence of faults in technical systems
    (https://www.mathworks.com/help/predmaint/ug/similarity-based-remaining-useful-life-estimation.html) based
    on data from the "PHM08 Challenge Data Set", NASA Ames Prognostics Data Repository
    (http://ti.arc.nasa.gov/project/prognostic-data-repository), NASA Ames Research Center, Moffett Field, CA. The
    method is focused on the use of reasonable technical characteristics of the equipment being estimated, which are
    sufficiently covered in the reference literature. Therefore, the method gives good results when assessing equipment
    whose operating conditions are close to the statistical average. This paper uses the Predictive Maintenance
    Toolbox™ in MATLAB, which includes several specialized models developed for calculating RUL from various
    types of measured system data. These models are useful when you have historical data and information, such as:
    ‒ failure histories of machines similar to the one to be diagnosed. The historical data for each member of the
    data ensemble is fitted to a model of identical structure; ‒ a known threshold value of some condition indicator
    indicating failure; ‒ data on how much time or how much use it took for similar machines to fail (service life).
    RUL estimation models provide methods for training a model using historical data and using it to make a remaining
    service life prediction. The term service life here refers to the useful life of a machine defined in terms of
    any quantity used to measure the service life of a system. Similarly, time evolution can mean the evolution of a
    value with usage, distance traveled, number of cycles, or another quantity that describes the service life. A general
    workflow for using RUL estimation models is: ‒ create and configure the corresponding model object;
    ‒ train the estimation model using the available historical data; ‒ using test data of the same type as the available
    historical data, estimate the RUL of the test component. It is also possible to use the test data recursively to
    update the model as new data becomes available, i.e. track the evolution of the RUL prediction as new data
    becomes available.
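    The similarity-based estimation step can be illustrated outside MATLAB with a toy version: score
    each historical run-to-failure trajectory by its distance to the test unit's partial
    health-indicator trajectory, then average the remaining life of the closest histories. The RMS
    distance and k-nearest averaging below are a plain stand-in for the toolbox's similarity models.

```python
def similarity_rul(histories, test, k=2):
    """Estimate remaining useful life of a test unit from run-to-failure
    histories.  Each history is a full health-indicator trajectory ending
    at failure; `test` is the partial trajectory observed so far."""
    n = len(test)
    scored = []
    for h in histories:
        if len(h) < n:
            continue                       # history too short to compare
        rms = (sum((a - b) ** 2 for a, b in zip(h[:n], test)) / n) ** 0.5
        scored.append((rms, len(h) - n))   # remaining life if test behaves like h
    if not scored:
        raise ValueError("no history long enough to compare")
    scored.sort()
    top = scored[:k]
    return sum(r for _, r in top) / len(top)
```

    Re-running the estimate as new test samples arrive gives exactly the recursive tracking of the
    RUL prediction described at the end of the abstract.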

  • DEVELOPMENT OF A CONVOLUTIONAL NEURAL NETWORK TO ASSESS THE SEVERITY OF KNEE OSTEOARTHRITIS

    Mannaa Ali Sajae, G. V. Muratova
    Abstract

    In this paper, we propose a novel method for the automated assessment of knee osteoarthritis
    severity, utilizing advanced machine learning techniques, specifically a deep neural network. Osteoarthritis
    is one of the most prevalent degenerative joint diseases, and its timely diagnosis is crucial for
    ensuring effective treatment. Traditional methods for visually assessing X-ray images of the knee joint
    present several limitations, including subjectivity and reliance on the experience of the clinician. Therefore,
    the development of automated medical image analysis techniques has become increasingly relevant.
    Osteoarthritis of the knee joint is one of the most common and severe degenerative diseases leading to a
    significant decrease in the quality of life of patients. Traditional methods of diagnosing osteoarthritis,
    such as visual assessment of X-ray images, depend on the subjective opinion of a specialist and his experience,
    which can lead to variations in the accuracy of diagnosis and timely detection of pathology. Therefore,
    the development and implementation of methods for automated analysis of medical images is highly
    relevant and has potential clinical value. In this study, we designed and trained a specialized neural network
    based on the ResNet-34 architecture, which has demonstrated significant effectiveness in solving
    computer vision problems. The network was modified to incorporate two parallel branches, each
    containing a spiral linear structure and four hidden layers. This design enables more precise identification of the
    knee joint area. Additionally, the architecture facilitates optimization of the loss function to account for
    varying pathological characteristics, such as different degrees of joint degradation, and to address the
    issue of class imbalance—a common challenge in medical imaging datasets. To further enhance model
    performance, the neural network was trained on two distinct datasets stratified by gender (male and female).
    This approach improved overall image quality and reduced the impact of noise introduced by artifacts
    during radiographic imaging. Moreover, we employed the ImagePixelSpacing technique during data
    preparation to standardize image resolution at 256 × 256 pixels, allowing for more accurate processing
    of fine details and structures within the knee joint. The network training employed state-of-the-art optimization
    techniques, resulting in a high level of classification accuracy. To evaluate the effectiveness of the
    proposed model, the Kappa test was utilized, confirming the reliability of baseline determinations.
    The model achieved an average accuracy of 93.76%, as demonstrated by the multiclass T-test, indicating
    its strong potential for clinical application. Additionally, the model’s area under the curve (AUC) score
    was 0.97, surpassing the results reported in previous studies in this domain. In conclusion, this research
    contributes significantly to the field of medical informatics and computer-based medical image analysis by
    offering an innovative solution for the automated assessment of osteoarthritis. This method has the potential
    to profoundly improve diagnostic accuracy and treatment outcomes in clinical settings. In addition,
    these results demonstrate the potential of the model as a reliable tool for automated assessment of the
    degree of osteoarthritis, which can not only improve the accuracy of diagnosis, but also facilitate the work
    of medical specialists. Further research may include adapting the model to analyze other joints and integrating
    additional functionality, such as predicting disease progression based on sequential scans.
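The Kappa statistic used above for evaluation has a short closed form. The sketch below is a generic implementation of Cohen's kappa, not code from the study, and the confusion matrix is purely illustrative.

```python
def cohens_kappa(confusion):
    """Cohen's kappa from a square confusion matrix (rows: reference grades,
    columns: predicted grades).

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement and
    p_e the agreement expected by chance from the row/column marginals.
    """
    n = len(confusion)
    total = sum(sum(row) for row in confusion)
    p_o = sum(confusion[i][i] for i in range(n)) / total
    p_e = sum(sum(confusion[i]) * sum(row[i] for row in confusion)
              for i in range(n)) / total ** 2
    return (p_o - p_e) / (1 - p_e)

# Illustrative 3-grade matrix (not data from the study).
matrix = [[40, 3, 1],
          [4, 35, 2],
          [1, 2, 12]]
print(f"kappa = {cohens_kappa(matrix):.3f}")
```

Values near 1 indicate agreement well beyond chance; kappa is preferred over raw accuracy when grade distributions are imbalanced, as the abstract notes for medical imaging datasets.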

SECTION III. COMPUTING AND INFORMATION MANAGEMENT SYSTEMS

  • ON THE REAL CAPABILITIES OF MODERN COMPUTING SYSTEMS FOR DISTRIBUTED MULTIPLICATION OF LARGE-DIMENSIONAL MATRICES

    V.M. Glushan, L.A. Popov, A.A. Tselykh
    Abstract

    The needs of practice constantly require improving the performance of computing systems. For
    quite a long time, multiprocessor systems have been the main way to build ultra-high performance computing
    systems. When creating such systems, many difficult problems arise. They are related to the need to
    parallelize the computing process in order to efficiently load the system processors, overcome conflicts
    when several processors try to use the same system resource, reduce the impact of conflicts on system
    performance, etc. With microelectronics overcoming the milestone of a billion transistors on a silicon
    chip, a new paradigm of multicore processors has emerged. At the same time, the problem of the ratio of
    multicore and multithreading in modern computers arose. This is due to the dilemma of preference between
    them. A multicore processor contains two or more electronic computing cores placed on a single
    semiconductor crystal. Each core of a multicore processor is a full-fledged microprocessor. Multicore is
    an obvious and traditional method of distributed solution of many complex tasks. But this cannot be said
    about multithreading, which relies on the use of very fast cache memory associated with the main memory
    and serves to reduce the average access time to the main memory of the processor. The relative novelty of
    modern approaches to the construction of computing systems requires comparative experimental studies
    of their capabilities. A promising and convenient mathematical object for these purposes is the distributed
    multiplication of matrices of large dimensions. The article presents practical results of distributed multiplication
    of square matrices with sizes from 300×300 to 2000×2000 and randomly generated values of
    elements in the matrices in the range from -100 to +100. Based on the experimental data presented in the
    corresponding tables and graphs, hyperbolic relations are obtained for the dependence of the matrix multiplication
    time on the number of virtual machines (cores) in the laptop used. Similar results were obtained
    by multiplying square matrices on single-processor computers connected to a local network. Analytical
    expressions in this case also represent hyperbolic time dependencies. But the numerical values in
    them significantly exceed those in the hyperbolic formula obtained for the laptop. The results lead to
    the conclusion that single-processor computers connected over a local network are inferior in performance
    to a multicore laptop for multiplying matrices of large dimensions. This is due to the significant time
    spent moving data over the local network.
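The timing experiment described above can be reproduced in miniature. The sketch below is a hypothetical stand-in for the authors' setup: it distributes row strips of a square matrix across worker processes and times the product for several worker counts. The matrix size and worker counts are illustrative, and the cost of sending B to every worker loosely mirrors the data-movement overhead the article highlights.

```python
import multiprocessing as mp
import random
import time

def multiply_rows(args):
    # One worker's share: multiply a horizontal strip of A by the full matrix B.
    rows, b = args
    n = len(b[0])
    k_dim = len(b)
    return [[sum(r[k] * b[k][j] for k in range(k_dim)) for j in range(n)]
            for r in rows]

def distributed_matmul(a, b, workers):
    # Split A into row strips, one per worker; B is pickled to every worker,
    # which stands in for the data movement measured in the article.
    chunk = (len(a) + workers - 1) // workers
    strips = [(a[i:i + chunk], b) for i in range(0, len(a), chunk)]
    with mp.Pool(workers) as pool:
        parts = pool.map(multiply_rows, strips)
    return [row for part in parts for row in part]

if __name__ == "__main__":
    n = 120  # small stand-in for the 300x300..2000x2000 matrices of the article
    a = [[random.randint(-100, 100) for _ in range(n)] for _ in range(n)]
    b = [[random.randint(-100, 100) for _ in range(n)] for _ in range(n)]
    for w in (1, 2, 4):
        t0 = time.perf_counter()
        distributed_matmul(a, b, w)
        print(f"{w} workers: {time.perf_counter() - t0:.3f} s")
```

For pure-Python arithmetic the wall time falls roughly hyperbolically with the worker count until per-worker data transfer starts to dominate, qualitatively matching the dependence reported in the article.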

  • DECENTRALIZED CONTROL OF A GROUP OF AUTONOMOUS MOBILE OBJECTS WHEN FORMING A TRAJECTORY OF MOVEMENT

    B.K. Lebedev, O.B. Lebedev, M.I. Beskhmelnov
    Abstract

    The article considers algorithms for generating the motion trajectories of unmanned aerial vehicles during
    search-and-rescue and disaster-response operations. Methods and algorithms are described for controlling
    the motion of a group of unmanned aerial vehicles in formation: deployment in a line, deployment in a rank,
    turning, and movement in a column. Control is carried out using alternative collective adaptation algorithms
    based on the ideas of collective behavior. The operating principles of one adaptation machine are
    considered. The purpose of controlling slave robots is to minimize deviations. To implement the adaptation
    mechanism, the parameters of the vector are matched with adaptation machines that model the behavior
    of adaptation objects in the environment. A structure has been developed for the process of alternative
    collective adaptation of parameters that control the motion of a group of unmanned aerial vehicles in
    formation. Original rules for controlling parameters have been developed that have a number of advantages
    over other methods: complete decentralization of control in combination with dynamic correction
    of robot parameters that set the position and orientation of the robot in an absolute coordinate system,
    and the linear velocity of the robot, respectively. A structure of a maneuver performed by a robot to correct
    parameter deviations is proposed. Control is performed using an alternative collective adaptation algorithm
    based on the ideas of collective behavior of adaptation objects, which allows for efficient processing
    of emergency situations, such as agent failure, changes in the number of agents due to failure or sudden
    acquisition of communication with the next agent, as well as in conditions of measurement errors and
    noise that satisfy certain restrictions.

  • PROBLEM OF MULTI-CRITERIA OPTIMIZATION OF SELECTION OF AN UNPREPARED HELIDROM

    P.G. Ermakov
    Abstract

    The problem of multi-criteria optimization of the choice of an unprepared helidrom for landing a
    helicopter-type unmanned aerial vehicle (UAV) on it is considered in this article. The problem is
    formalized on the basis of satisfying the requirements of the International Civil Aviation Organization
    (ICAO) for an unprepared helidrom by minimizing the original loss function, taking into account the
    following data: the probability of availability of an unprepared helidrom, the probability of failure of
    the helicopter-type UAV's onboard system, the positional error of a digital elevation map (DEM), the
    error of the UAV's coordinate information, and the technical characteristics of the helicopter-type UAV.
    It is proposed to determine the suitability of an unequipped helidrom from the maximum height of terrain
    elements on its surface, using statistical processing of lidar earth-scanning data. Mathematical
    formulations of the problem of decision-making on landing a helicopter-type UAV are proposed based on the
    requirements for an unprepared helidrom in terms of maximum terrain-element height and soil hardness.
    The computational times of algorithms for the choice of an unprepared helidrom are compared using a
    Raspberry Pi 3 Model B. The results of simulation modelling of the proposed optimal algorithm for the
    choice of an unprepared helidrom, carried out to estimate its efficiency under variability of the
    parameters of the probabilistic loss function using OpenStreetMap and SRTM data, are presented. The result
    of solving the problem of decision-making on landing a helicopter-type UAV based on lidar earth-scanning
    data is also presented.
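The loss-function formulation in the abstract can be illustrated schematically. The sketch below is not the paper's loss function: the clearance limit h_limit_m, the unit weights, and the way the errors are combined are all invented for illustration; only the list of input factors follows the abstract.

```python
import math

def site_loss(p_available, p_onboard_failure, dem_error_m, nav_error_m,
              max_terrain_m, h_limit_m=0.3):
    """Hypothetical loss for one candidate landing site.

    Inputs mirror the factors listed in the abstract; h_limit_m, the unit
    weights, and the error combination are illustrative assumptions.
    """
    # Worst-case clearance margin, degraded by DEM and navigation errors.
    margin = h_limit_m - (max_terrain_m + dem_error_m + nav_error_m)
    if margin < 0:
        return math.inf  # site violates the terrain-height requirement
    return (1.0 - p_available) + p_onboard_failure + 1.0 / (1.0 + margin)

def choose_site(candidates):
    # Multi-criteria choice reduced to minimizing a scalar loss.
    return min(candidates, key=lambda c: site_loss(**c))
```

Here each candidate would be a dictionary of the five factors estimated for a site from the DEM and the lidar scan statistics, and the selected helidrom is the one with minimum loss.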

  • FEATURES OF CONTROL OF LINEAR DRIVES OF A ROBOT DURING ITS MOVEMENT ON A VERTICAL SURFACE

    A.A. Khachatryan, E.S. Briskin
    Abstract

    The operation of robots on vertical and near-vertical surfaces has broad prospects, owing on the one hand
    to the need to perform a sufficiently large number of technological operations on such surfaces and, on
    the other, to the difficulty of using manual labor there. The movement of a mobile robot along a vertical surface
    is considered. The movement of the robot and its retention on the surface is carried out through the operation
    of two linear actuators that exert pressure on it and rely on platforms capable of moving along a horizontal
    surface. The robot and the platforms have caster-type wheels operating in one of two modes, free and braked. The braking devices ensure reliable adhesion of the wheels to the corresponding
    surfaces. A design scheme and a mathematical model of a robotic system using the force of linear actuators
    to move the robot along a vertical flat surface are proposed. The problem of the dynamics of the movement
    of a mobile robot has been solved, the movement of which along the working surface is carried out by
    controlling the magnitude and direction of the forces developed by the actuators and by the choice of braked
    supports that ensure a stable mode of movement. The process of movement is considered as consisting of three
    stages, at each of which one of the robot's supports is braked, while all the supports of the platforms on the
    horizontal surface are also braked. During the transition between the stages of movement, the mobile robot
    makes a stop before changing the braked wheel, after which movement resumes. The friction forces between
    the unbraked robot supports and the working surface are neglected. The equations and trajectories of the
    motion of the center of mass of the mobile robot are obtained. The dependences of the lengths of the linear
    drives of the clamping mechanism on the coordinates of the center of mass of the robot are presented. Simulation
    modeling was carried out, as a result of which the ranges of changes in the lengths of linear actuators
    and the forces developed to ensure the required displacement were determined.

  • AUTOMATION OF THE USE OF FALSE COMPONENTS IN THE INFORMATION SYSTEM

    S.A. Smirnov, N.Y. Parotkin, V.V. Zolotarev
    Abstract

    The article considers the applicability of deceptive information systems and their components in
    building an automated system for deploying and managing the applied implementation of deceptive component
    technology to improve the attack prevention system. The main advantages of the technology and its role in
    an information security strategy are outlined, defining the specifics and scope of practical application
    of its means and tools. The article considers the fundamentals of the architecture and features of
    applying the technology, as well as its limitations. The purpose and objectives of using the technology
    are stated through a disclosure of its key implementation principles. In addition, regulatory publications
    and other recommendations constituting the best practices in the field of its use were analyzed.
    The concept and architecture of the final automated solution for integration into information systems and
    security systems are considered, and the functional content of the final solution is described. A distinctive
    feature of the proposed solution is the use of controlled containerization mechanisms, that provide ample
    opportunities for scaling the solution and isolating compromised system components as a result of an
    intruder's actions. The process of practical implementation of the automation system is schematically
    described in terms of the solution's subsystems, in relation to dependent components (such as the proposed
    document artifacts and external tools and systems) and the processing conditions of the operations involved. A model of
    deployment and operation of a distributed automation system is also provided in the following sequence:
    setting up a deployment server (including provisioning), deploying a network of false decoy components
    based on containerization, deploying external baits, integrating with systems and instances of the information
    security stack external to the solution. The solution implements the following principle: fake assets and
    resources of the fictitious environment are deployed in the information technology infrastructure using
    management controls and are intended to be acted upon by the adversary. The deployed set of
    subsystem tools was tested using a third-party node with the appropriate tools and scanning scenarios.
    Recommendations are given for further improvement of the automation system for deployment and management
    of tools and measures for deceptive component technology.

  • STATE REGULATION OF NAMING AND SOFTWARE IDENTIFICATION IN VULNERABILITY MANAGEMENT PROCESSES

    V.G. Zhukov, S.V. Seligeev
    Abstract

    IT asset management is the foundation for building an effective vulnerability management process.
    Without an understanding of the IT assets under control, it is technically impossible to start building a
    vulnerability management process. With an existing IT asset management process in place, one of the
    tasks that is essential to vulnerability management is to uniquely name software as an asset. This unambiguous
    naming allows the software and its vulnerabilities to be identified without actively scanning IT
    infrastructure nodes, but only by interacting with the IT asset management system. Technically, this approach
    can be called “passive vulnerability detection,” but it is extremely labor-intensive to implement
    using existing naming systems. In order to make the possibility of passive detection more realistic, the
    authors propose to create a common foundation by forming a conceptual scheme and then creating a system
    of standardized naming and identification of software, the regulation of which will be centralized at
    the state level. As part of the review of existing software naming systems, attention is paid to CPE problems
    both on the part of on-site specialists, namely obtaining CPE identifiers and translating software
    information into a CPE identifier, and on the part of a vulnerability data aggregator, namely obtaining
    vulnerability information via a CPE identifier. The problems of CPE application, as well as the problems
    of interaction with vulnerability data aggregators from unfriendly countries, discovered in the course of
    the research form the prerequisites for the formation of a national system for state regulation of software
    naming and identification, which will eliminate the problems of existing software naming systems. In conclusion,
    the advantages of a national system of software naming and identification are given for the case of its
    creation and use in real conditions by all participants of the vulnerability management process.
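For context on the CPE identifiers discussed above: a CPE 2.3 formatted string packs thirteen colon-separated attributes. The sketch below parses one; it deliberately ignores the backslash-escaped colons the specification allows, so it is a simplification rather than a conformant parser.

```python
CPE23_FIELDS = ("part", "vendor", "product", "version", "update", "edition",
                "language", "sw_edition", "target_sw", "target_hw", "other")

def parse_cpe23(cpe):
    """Split a CPE 2.3 formatted string into its named attributes.

    Simplified sketch: ignores the backslash-escaped colons permitted by
    the specification, which suffices for typical identifiers.
    """
    parts = cpe.split(":")
    if len(parts) != 13 or parts[0] != "cpe" or parts[1] != "2.3":
        raise ValueError(f"not a CPE 2.3 formatted string: {cpe!r}")
    return dict(zip(CPE23_FIELDS, parts[2:]))

attrs = parse_cpe23("cpe:2.3:a:apache:http_server:2.4.54:*:*:*:*:*:*:*")
print(attrs["vendor"], attrs["product"], attrs["version"])  # apache http_server 2.4.54
```

Even this trivial exercise shows the practical difficulty the article raises: nothing in the format guarantees that the vendor and product strings an on-site specialist writes will match the ones a vulnerability data aggregator used.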

  • MICROWAVE CIRCUIT ANALYZERS ON A MULTI-PROBE MEASURING LINE. REVIEW OF SIGNAL PROCESSING METHODS, PROBLEMS AND PROSPECTS (REVIEW)

    A.A. L’vov, B.M. Kats, P.A. L’vov, V.P. Meschanov, K.A. Sayapin
    Abstract

    Further progress in microwave technology is inextricably linked with the creation of new precision
    automatic measuring systems. In our country, microwave circuit vector analyzers that can measure the
    amplitude and phase relationships of the S-parameters of the microwave networks under test are not
    mass-produced. The use of multi-port reflectometers (MPR) as measuring devices in automatic microwave
    circuit analyzers allows creating relatively cheap and high-precision devices for studying load parameters.
    The paper provides an overview of the works in which the MPR method is developed, when the latter
    can be represented by a multi-probe transmission line reflectometer (MTLR). The history of the development
    of measurement methods using traditional MPR is briefly described and it is shown that the main
    problem of their use is reflectometer calibration, which can be carried out accurately only using a set of
    precision calibration standards. MTLR, which is a special case of MPR, is studied in detail. It is shown
    that random measurement errors of the MTLR method are higher than those of a precisely calibrated MPR.
    However, the MTLR has important advantages that are discussed in the paper. A strategy for increasing
    the measurement accuracy using the MTLR is described: 1) optimal methods for processing output signals
    from the MTLR probes using the maximum likelihood method are proposed; 2) methods for calibrating the MTLR sensors are studied in detail and it is shown that it can be calibrated using a set of inaccurately known
    loads with their parallel certification, therefore, systematic calibration errors are significantly reduced;
    3) methods for optimizing the MTLR design by arranging the probes inside the microwave path for measuring
    with maximum accuracy in narrow and wide frequency ranges are studied, and it is also shown how it is
    possible to measure with potentially achievable accuracy due to the proper choice of weighting coefficients in
    the MTLR probes. Random and systematic errors in measuring the complex reflection index of microwave
    loads, as well as uncertainties in measuring types A and B by the MTLR method are investigated, and references
    to relevant works are given. In conclusion, the possibilities of joint use of the MTLR and MPR methods
    are considered, a combined MPR is briefly described, which measures with an accuracy characteristic of a
    traditional MPR, but can be calibrated using a set of unknown loads, which is inherent in the MTLR method.
    Keywords: automatic network analyzer, multi-port reflectometer, multi-probe measuring line, maximum
    likelihood method, error dispersion matrix, meter calibration.

SECTION IV. NANOTECHNOLOGY, ELECTRONICS AND RADIO ENGINEERING

  • RECTENNA MODEL BASED ON MOSFETS FOR MICROWAVE ENERGY HARVESTING AT ULTRA-LOW POWER LEVELS

    B.G. Konoplev
    Abstract

    For wireless and battery-free power supply of autonomous devices with low power consumption harvesting
    of radio frequency energy from the environment is increasingly used: energy from cellular stations,
    radio stations, microwave ovens, Wi-Fi, Bluetooth, etc. To convert the collected energy into a DC voltage,
    devices consisting of an antenna, a rectifier and an impedance matching circuit of the antenna and the rectifier,
    called rectennas, are used. The power density of the electromagnetic field can be very small: from hundreds
    of microwatts to tens of picowatts per cm². Therefore, the task of developing rectennas capable of operating
    at ultra-low power levels is urgent. The parameters of components of the rectenna (antenna, impedance
    matching circuit, rectifier) are strongly interconnected, therefore, to obtain optimal characteristics, it is
    necessary to design the rectenna considering the mutual influence of all components and use appropriate
    models. The paper analyzes the features of the construction and development of a rectenna model based on
    MOSFETs for operation at ultra-low power levels. Expressions for estimating the output voltage of the
    rectenna are obtained, considering the basic parameters of the antenna, the rectifier/voltage multiplier and
    the impedance matching circuit. Calculations based on the obtained expressions and modeling are performed
    for a typical 90 nm CMOS technology. The possibility of constructing rectennas based on MOSFETs at
    ultra-low power levels down to −50 dBm is shown. Recommendations are given on the choice of technological and
    design parameters of rectennas for harvesting microwave energy.
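The scale of the problem at −50 dBm can be illustrated with textbook relations; these are generic estimates, not the expressions derived in the paper, and the threshold value v_th is an illustrative assumption.

```python
import math

def dbm_to_watts(p_dbm):
    # Standard conversion: 0 dBm corresponds to 1 mW.
    return 1e-3 * 10 ** (p_dbm / 10)

def rf_peak_voltage(p_dbm, r_in_ohm=50.0):
    # Peak voltage at the rectifier input for a matched source impedance.
    return math.sqrt(2 * dbm_to_watts(p_dbm) * r_in_ohm)

def multiplier_vout(p_dbm, stages, v_th=0.05, r_in_ohm=50.0):
    """Idealized unloaded N-stage multiplier output, ~2N(V_peak - V_th).

    v_th stands in for the effective transistor threshold after low-V_th
    design techniques; both its value and the formula are illustrative.
    """
    return max(0.0, 2 * stages * (rf_peak_voltage(p_dbm, r_in_ohm) - v_th))

# At -50 dBm into 50 ohms the peak input is only about 1 mV, which is why
# ultra-low-threshold devices and careful impedance matching are essential.
print(f"{rf_peak_voltage(-50) * 1e3:.3f} mV peak at -50 dBm")
```

With millivolt-scale inputs, even a 50 mV effective threshold drives the idealized multiplier output to zero, which motivates the MOSFET-based, threshold-aware design the abstract describes.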

  • PCB SUBSTRATES CHARACTERISATION USING PRINTED STRUCTURES

    M.M. Migalin, V.A. Obukhovets
    Abstract

    Growing user requirements for data exchange rates in telecommunication systems have resulted in
    the active adoption of mm-band wavelengths and the intensive development of broadband communication
    systems. Designing mm-wave microwave devices using CAD requires accurate frequency-dependent relative
    permittivity data for the used substrate to reduce the device design time. This paper focuses on determining
    the relative dielectric constant of the Rogers 3003G2 substrate in the mm-wavelength range. Both
    non-resonant and resonant methods were used to find the dielectric permittivity. The automation of the
    measurement data processing was achieved by using the developed script in MATLAB. The relative permittivity
    of the substrate in the band 1-42 GHz was determined by applying the phase difference method,
    using two microstrip lines of different lengths. SIW resonators with waveguide excitation were developed
    to avoid using a probe station with fragile probes for S-parameter measurements in the mm-wave range.
    The relative permittivity of the studied substrate in the 60-170 GHz range was found using three prototype
    multi-mode SIW resonators. A set of single-mode SIW resonators with different waveguide excitation coupling
    was produced to avoid ambiguity in longitudinal mode number determination in multi-mode SIW
    resonators. Several loaded resonant frequencies were obtained by varying the length of SIW-resonators'
    excitation slots to calculate the unloaded resonant frequency used to find the relative dielectric permittivity
    of the substrate. Recommendations for developing SIW resonators for the determination of dielectric
    properties of the substrates are given in the conclusion section.
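The phase-difference method mentioned above rests on the standard relation Δφ = 2πf·ΔL·√ε_eff/c. The sketch below is a generic implementation of that relation, not the authors' MATLAB script; it yields only the line's effective permittivity, since mapping ε_eff back to the substrate's relative permittivity requires a microstrip dispersion model.

```python
import math

C0 = 299_792_458.0  # speed of light in vacuum, m/s

def effective_permittivity(delta_phi_rad, freq_hz, delta_len_m):
    """Effective permittivity from the unwrapped phase difference between
    two lines whose physical lengths differ by delta_len_m.

    From delta_phi = 2*pi*f*delta_L*sqrt(eps_eff)/c; converting eps_eff to
    the substrate's relative permittivity needs a microstrip dispersion
    model and is omitted from this sketch.
    """
    return (delta_phi_rad * C0 / (2 * math.pi * freq_hz * delta_len_m)) ** 2

# Round trip: a 10 mm length difference at 10 GHz with eps_eff = 2.3.
phi = 2 * math.pi * 10e9 * 0.01 * math.sqrt(2.3) / C0
print(f"eps_eff = {effective_permittivity(phi, 10e9, 0.01):.3f}")  # eps_eff = 2.300
```

Using the phase *difference* of two lines cancels connector and launch effects common to both, which is the reason the method measures two microstrip lines of different lengths rather than one.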

  • WIRELESS UAV CHARGING SYSTEM WITH BATTERY BALANCING FUNCTIONALITY

    V.V. Burlaka, S.V. Gulakov, A.Y. Golovin, D.S. Mironenko
    Abstract

    The issue of creating a wireless charging system for an on-board battery of an unmanned aerial vehicle
    (UAV) is considered, taking into account the need to balance the voltages of its elements. When designing
    the system, based on a brief overview of the principles of wireless energy transmission, the principle
    of using magnetically coupled circuits is taken as the most suitable in terms of its technical and economic
    properties. The aim of the work is to develop a circuit solution for a UAV's wireless battery charging
    system with the ability to balance voltages both during charging and during load operation. The use of
    such a system will improve the safety of battery operation and extend its service life by leveling the degree
    of wear (aging) of the elements. As a result of the research, a circuit was developed and an experimental
    sample of the specified wireless charging system was manufactured. When synthesizing the circuit, the
    task was to minimize the number of components in the power circuits in order to reduce the mass of the
    system and its cost. The maximum power of the experimental wireless charging system exceeds 100 Watts
    (25 V · 4 A) and is somewhat excessive for an on-board battery with a capacity of 1,500 mAh. Forced
    cooling of the receiving part is not required. The weight of the receiving part mounted on an unmanned
    aerial vehicle is 79 g (40 g for the receiving coil and 39 g for the electronics unit); it can be reduced
    further by decreasing the cross-section of the receiving coil conductors, using a thinner laminate in the
    electronics unit, sealing the assembly, and using a two-sided arrangement of components.
    Laboratory tests have been carried out, confirming the operability of the proposed technical solutions,
    and the effectiveness of balancing during charging has been evaluated. In order to evaluate the
    effectiveness of the balancing system during the experiments, the output resistance of the receiver (U/I)
    was calculated relative to one of the elements of the on-board battery when the voltage on it changes.
    The result was 1.9 ohms with a charge current of 0.8 A (6S 1500 mAh battery).

  • INVESTIGATION OF THE INFLUENCE OF ANNEALING MODES OF THE GAAS(111) SURFACE ON THE CHARACTERISTICS OF NANOHOLES FORMED BY FOCUSED ION BEAMS AT VARIOUS EXPOSURE TIMES

    E.A. Lakhina, N.E. Chernenko, N.A. Shandyba, S.V. Balakirev, M.S. Solodovnik
    Abstract

    The paper presents the results of experimental studies of the processes of formation of holes by the
    method of focused ion beams on GaAs(111) substrates and their subsequent transformation during annealing
    in an ultrahigh vacuum chamber of molecular beam epitaxy in an arsenic flux and in its absence.
    It was found that at an ion beam exposure time of 1 ms, the processes of ion accumulation in the substrate
    prevail over the processes of the material sputtering, whereas at an exposure time of 5 ms, intensive sputtering
    of the substrate material occurs at the points of exposure to the ion beam with an increase in the
    depth of the etched areas with an increase in the number of ion beam passes. After annealing of substrates
    with ion beam-modified areas, the holes increase significantly in size as a result of local droplet etching
    processes. Studies showed that the hole size after annealing in the arsenic flux exceeds the hole size after
    annealing in the absence of an arsenic flux in almost the entire range of the number of ion beam passes.
    The dependences of the depth and lateral size of the holes on the number of ion beam passes are nonmonotonic,
    due to the competition of the processes of droplet etching and crystallization of ion beam-modified
    areas in the arsenic flux. The results of experimental studies show that to obtain highly symmetric
    pyramidal holes with low surface density, it is required to create on the GaAs(111) surface an array of
    focused ion beam treatment points with an interval of 2 μm at an exposure time of 5 ms and a number of
    passes equal to 40. At the next stage, it is necessary to transform the ion beam processing points into
    pyramid-shaped holes by annealing the substrate in a molecular beam epitaxy chamber at a temperature
    of 600°C and a time interval of 60 minutes. The technique proposed in this work, based on the combination
    of ion-beam surface treatment and molecular beam epitaxy, makes it possible to obtain nanoholes
    with the required symmetry, which can further serve as nucleation centers for InAs quantum dots with the
    desired properties.

  • TECHNOLOGICAL AND DIELECTRIC PROPERTIES OF RESINS FOR DLP 3D PRINTING WITH ADDITIVES OF AL2O3 AND CTS-19 POWDERS

    A.V. Yudin, Y.I. Yurasov, P.S. Plyaka, M.I. Tolstunov, O.A. Belyak
    Abstract

    Expanding the range of materials available for processing by additive methods is of great interest to
    industry. Technologies such as 3D polymer printing significantly expand the boundaries of design capabilities,
    allowing a transition to next-generation devices. In view of the gradual implementation of such approaches
    in practice, a new impetus for development has been given to the direction of metamaterials -
    volumetric structures whose geometry allows for more complete use of the properties of the base material.
    In particular, ceramics, common in modern electronics, can be introduced into a polymer molded by an
    additive method as a functional additive. Subsequent heat treatment of such compositions allows obtaining
    a macrostructured ceramic-polymer or purely ceramic framework with unique piezo- or dielectric properties.
    However, additive particles can significantly change the technological properties of the base material,
    which must be taken into account. At the same time, isolating the empirical features characterizing this
    dynamics is a non-trivial task. Thus, in publications on UV-curable composites, the viscosity criterion of
    the composition is recognized as the leading feature. At the same time, optical permittivity, which determines
    the required equipment power, is not considered properly. In this regard, the presented work studies
    the viscosity, dielectric, optical and temperature properties of composites based on UV-curable resin for
    DLP 3D printing, containing additives of 5 vol. % Al2O3 and CTS-19 powders. A method for qualitative
    express analysis of the technological suitability of the composition based on the Scotch test is presented. It
    is shown that the viscosity of the composition is less significant in comparison with its optical permittivity
    in the UV range. The considered compositions have temperature stability up to 300 °C. The introduction of
    powder additives makes it possible to increase the dielectric permittivity ε'/ε0 by 2.5 times and reduce
    dielectric losses in the material when heated above 110 °C. It is shown that composites containing aluminum
    oxide have potential for use in electronics.