Deep learning in structural optimization

In this article we outline the possibilities of artificial intelligence in the optimization of structures, in particular the use of deep learning. Deep learning (DL) is a subset of machine learning (ML), which in turn is a subset of artificial intelligence. Artificial intelligence began in the 1950s, machine learning emerged in the 1980s, and deep learning was born in the 21st century, around 2010, with the arrival of large supercomputers and the growth of accessible data. As a curiosity, one of the great milestones of DL occurred in 2012, when Google was able to recognize a cat among more than 10 million YouTube videos, using 16,000 computers. Today, far fewer resources would be needed.

In any of these three cases, we are talking about computer systems capable of analyzing large amounts of data (big data), identifying patterns and trends and, on that basis, making predictions automatically, quickly and accurately. We discussed artificial intelligence and its applicability to civil engineering in a previous article.

If we think of structural analysis, we use models, more or less sophisticated, that allow us, provided the actions are known with sufficient precision, to determine the internal forces to which each of the elements into which we have divided the structure is subjected. These forces are used to check a series of limit states, which are a set of potentially dangerous situations for the structure, verifying whether the structural capacity of the analyzed element, which depends on its geometry and constituent materials, exceeds, with a certain probability, the maximum force effect to which that element may be subjected.
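To fix ideas, here is a minimal sketch of such a check in Python; the partial safety factors and numerical values are purely illustrative and are not taken from any particular code of practice:

```python
# Minimal sketch of an ultimate limit state check (illustrative values only).

def design_load_effect(permanent, variable, gamma_g=1.35, gamma_q=1.5):
    """Factored load effect E_d from characteristic permanent and variable actions."""
    return gamma_g * permanent + gamma_q * variable

def design_resistance(characteristic_resistance, gamma_m=1.5):
    """Design resistance R_d obtained by dividing by a material partial factor."""
    return characteristic_resistance / gamma_m

def limit_state_ok(E_d, R_d):
    """The element is acceptable if the design resistance exceeds the design effect."""
    return R_d >= E_d

# Hypothetical bending check of a beam section (values in kN·m).
E_d = design_load_effect(permanent=120.0, variable=80.0)
R_d = design_resistance(characteristic_resistance=450.0)
print(f"E_d = {E_d:.1f} kN·m, R_d = {R_d:.1f} kN·m, OK = {limit_state_ok(E_d, R_d)}")
```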

These traditional methods range from hypotheses of elasticity and linear behavior to more complex models with plastic or nonlinear behavior. The finite element method (FEM) and the stiffness matrix method are often used, with varying degrees of sophistication. Ultimately, in many cases, computers are used to solve approximately the very complex partial differential equations that are common in structural engineering, but also in other fields of engineering and physics. For these calculation systems to be accurate, the models must be fed with data on materials, boundary conditions, actions, etc., that are as realistic as possible. For this purpose, the models are tested and calibrated against real laboratory tests (Friswell and Mottershead, 1995). In a way, we are feeding information back to the model, and therefore it “learns”.
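As a toy example of the stiffness matrix method, the sketch below assembles and solves a two-element axial bar with NumPy; the dimensions and the load are invented for illustration:

```python
import numpy as np

# Two axial bar elements in series, fixed at node 0, loaded at node 2.
E, A, L = 210e9, 1e-3, 2.0           # Young's modulus (Pa), area (m^2), element length (m)
k = E * A / L                        # axial stiffness of one element

# Assemble the global 3x3 stiffness matrix from two identical elements.
K = np.zeros((3, 3))
for i in (0, 1):                     # element connects nodes i and i+1
    K[i:i+2, i:i+2] += k * np.array([[1, -1], [-1, 1]])

F = np.array([0.0, 0.0, 10e3])       # 10 kN axial load at the free end

# Apply the boundary condition u_0 = 0 and solve K u = F on the free DOFs.
free = [1, 2]
u = np.zeros(3)
u[free] = np.linalg.solve(K[np.ix_(free, free)], F[free])
print("nodal displacements (m):", u)
```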

If we look closely at what we are doing, we are using a model, more or less complicated, to predict how the structure is going to behave. Well, if we had a sufficient amount of data from the laboratory and from real cases, an intelligent system could extract information and would be able to predict the final result. While artificial intelligence must be fed with a huge amount of data (big data), the finite element method requires less raw information (smart data), because very thorough and rigorous prior work has gone into understanding the underlying phenomenon and modeling it properly. In short, they are two different procedures that lead us to the same objective: to design safe structures. Whether these structures are optimal from any point of view (economy, sustainability, etc.) is another matter.

The optimization of structures is a scientific field in which intensive work has been carried out in recent decades. Because real problems involve a large number of variables, the exact solution of the associated optimization problem is unaffordable. These are NP-hard problems, of high computational complexity, which require metaheuristics to reach satisfactory solutions in reasonable computational times.
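To illustrate how a metaheuristic explores such a space, here is a minimal simulated annealing skeleton over two discrete design variables; the cost and feasibility functions are hypothetical placeholders for the real structural checks:

```python
import math, random

# Hypothetical discrete design variables: member depth (m) and steel amount (kg/m3).
DEPTHS = [round(0.20 + 0.05 * i, 2) for i in range(20)]
STEELS = list(range(60, 201, 10))

def cost(design):
    """Placeholder cost function (e.g., euros per metre of structure)."""
    depth, steel = design
    return 800 * depth + 1.2 * steel

def feasible(design):
    """Placeholder for the limit-state checks; here a crude capacity proxy."""
    depth, steel = design
    return 1000 * depth + 2 * steel >= 500

def neighbour(design):
    """Randomly perturb one variable to another value in its list."""
    depth, steel = design
    if random.random() < 0.5:
        depth = random.choice(DEPTHS)
    else:
        steel = random.choice(STEELS)
    return (depth, steel)

def simulated_annealing(T=100.0, cooling=0.999, iters=2000):
    current = (DEPTHS[-1], STEELS[-1])            # start from a heavy, safe design
    best = current
    for _ in range(iters):
        cand = neighbour(current)
        if feasible(cand):
            delta = cost(cand) - cost(current)
            if delta < 0 or random.random() < math.exp(-delta / T):
                current = cand
                if cost(current) < cost(best):
                    best = current
        T *= cooling
    return best, cost(best)

print(simulated_annealing())
```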

One of the characteristics of optimization using metaheuristics is the high number of iterations in the solution space, which generates an immense amount of data for the set of structures visited. This is the ideal field for artificial intelligence, as it allows information to be extracted to accelerate and refine the search for the optimal solution. One such example is our work (García-Segura et al., 2017) on the multi-objective optimization of box-girder bridges, where a neural network learned from the intermediate data of the search and then predicted the outcome of the bridge calculation with extraordinary accuracy, without the need to perform it. This allowed the final computation time to be reduced considerably.
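The idea can be sketched with scikit-learn: a neural network is trained on the designs already evaluated during the search and is then used as a cheap surrogate for the expensive structural analysis. The expensive_analysis function below is just a stand-in for a real finite element check, and all values are illustrative:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def expensive_analysis(x):
    """Stand-in for a costly FEM evaluation: returns some structural response."""
    depth, steel = x
    return 5.0 / depth + 0.01 * steel + 0.1 * depth * steel

# Designs already visited (and fully analysed) by the metaheuristic.
X_visited = rng.uniform([0.2, 60], [1.2, 200], size=(500, 2))
y_visited = np.array([expensive_analysis(x) for x in X_visited])

# Train a neural network surrogate on the intermediate search data.
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
surrogate.fit(X_visited, y_visited)

# New candidate designs are now screened with a prediction instead of a full analysis.
X_new = rng.uniform([0.2, 60], [1.2, 200], size=(5, 2))
print(np.c_[surrogate.predict(X_new), [expensive_analysis(x) for x in X_new]])
```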


However, this type of application is still fairly simple, as it only reduces the computation time (each complete check of a bridge by the finite element method is much slower than a prediction with a neural network). The next step is for metaheuristics to be able to learn from the collected data, using artificial intelligence, so as to become much more effective, and not just faster.

Neither artificial intelligence nor machine learning is a new science. The problem is that their applications were limited by the lack of data and of technologies to process them quickly and efficiently. Today a qualitative leap has been made and DL can be used, which, as we have already said, is a part of ML, but one that uses more sophisticated algorithms built on the principle of neural networks. Let us say that DL (neural networks) uses different algorithms from classical ML (regression algorithms, decision trees, among others). In both cases, the algorithms can learn in a supervised or unsupervised way; in unsupervised learning, only the input data are provided, not the output data. The name deep learning refers to deep neural networks, which use a large number of layers in the network (say, for example, 1,000 layers). In fact, DL is also often referred to as “deep neural networks”, and artificial neural networks are one of the most common DL techniques.
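To make the distinction concrete, the sketch below contrasts, on synthetic supervised data, a classical ML algorithm (a decision tree) with a small deep network, i.e., a multilayer perceptron with several stacked hidden layers; the data and layer sizes are assumptions for illustration only:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.neural_network import MLPRegressor

# Synthetic supervised data: inputs X with known outputs y.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(1000, 3))
y = np.sin(3 * X[:, 0]) + X[:, 1] * X[:, 2]

# Classical ML: a decision tree learns a piecewise-constant mapping.
tree = DecisionTreeRegressor(max_depth=6).fit(X, y)

# "Deep" learning in miniature: a network with several stacked hidden layers.
deep_net = MLPRegressor(hidden_layer_sizes=(64, 64, 64, 64),
                        max_iter=3000, random_state=1).fit(X, y)

X_test = rng.uniform(-1, 1, size=(5, 3))
print("tree:", tree.predict(X_test))
print("net :", deep_net.predict(X_test))
```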

One type of neural network used in DL is the convolutional neural network, a variation of the multilayer perceptron that operates on two-dimensional arrays, which makes it very effective in computer vision tasks such as image classification and segmentation. In engineering, for example, it can be used for structural condition monitoring, e.g., for deterioration analysis. One can imagine how far we could go by recording the failure of concrete structures in the laboratory as digital images, and what predictive capacity these tools would have if they had enough data. It is all a matter of time. Here is a typical traditional application (by Antoni Cladera, from the University of the Balearic Islands), in which the model of a beam failing in bending is explained on the blackboard and the beam is then broken in the laboratory. How much data we are losing in that recording! A very recent example of the use of DL and Digital Image Correlation (DIC) applied to specimen failures in the laboratory is the work of Gulgec et al. (2020).
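As an illustration, here is a minimal Keras sketch of such a convolutional network for classifying laboratory images as cracked or uncracked; the dataset, image size and layer sizes are all assumptions, and random data are used only so that the snippet runs:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Hypothetical data: 128x128 grayscale images labelled cracked (1) / uncracked (0).
rng = np.random.default_rng(2)
X = rng.random((200, 128, 128, 1)).astype("float32")
y = rng.integers(0, 2, size=200)

# Small convolutional network: convolution + pooling blocks, then a classifier head.
model = keras.Sequential([
    keras.Input(shape=(128, 128, 1)),
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=2, batch_size=32, verbose=0)   # random labels: illustration only
```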


Here, however, we want to dwell on the specific integration of DL into metaheuristics in order to improve the quality of the solutions or the convergence times when optimizing structures. An example of this novel research path is the use of algorithms that hybridize DL and metaheuristics. We have already published several papers along these lines applied to the optimization of buttressed walls (Yepes et al., 2020; García et al., 2020a, 2020b). In addition, we have proposed, as guest editors, a special issue in the journal Mathematics (indexed in the first decile of the JCR) entitled “Deep Learning and Hybrid-Metaheuristics: Novel Engineering Applications”.
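One simple reading of that hybridization, sketched below under many simplifying assumptions (this is not the algorithm of the cited papers, only an illustration of the idea), is that a neural network surrogate screens the candidate designs proposed by the metaheuristic, so that only the most promising ones are passed to the expensive structural analysis, and the surrogate is retrained as new analyzed designs accumulate:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)

def expensive_analysis(x):
    """Stand-in for the full structural evaluation (cost of a feasible design)."""
    return 800 * x[0] + 1.2 * x[1] + 50.0 / x[0]

low, high = np.array([0.2, 60.0]), np.array([1.2, 200.0])
X = rng.uniform(low, high, size=(50, 2))                 # initial analysed designs
y = np.array([expensive_analysis(x) for x in X])

surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=3)

for generation in range(10):
    surrogate.fit(X, y)                                  # learn from all data so far
    candidates = rng.uniform(low, high, size=(200, 2))   # candidates from the metaheuristic
    ranked = candidates[np.argsort(surrogate.predict(candidates))]
    chosen = ranked[:5]                                  # analyse only the most promising
    X = np.vstack([X, chosen])
    y = np.concatenate([y, [expensive_analysis(x) for x in chosen]])

best = X[np.argmin(y)]
print("best design found:", best, "cost:", y.min())
```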
