Neural networks for topology optimization

Ivan Sosnovik (i.sosnovik@uva.nl), University of Amsterdam, Amsterdam, The Netherlands
Ivan Oseledets (i.oseledets@skoltech.ru), Skolkovo Institute of Science and Technology, Moscow, Russia; Institute of Numerical Mathematics RAS, Moscow, Russia

Abstract

In this research, we propose a deep learning based approach for speeding up topology optimization methods. The problem we seek to solve is the layout problem. The main novelty of this work is to state the problem as an image segmentation task. We leverage the power of deep learning methods as an efficient pixel-wise image labeling technique to perform topology optimization. We introduce a convolutional encoder-decoder architecture and an overall approach for solving the above-described problem with high performance. The conducted experiments demonstrate a significant acceleration of the optimization process. The proposed approach has excellent generalization properties, and we demonstrate that the model can be applied to other types of problems. The successful results, as well as the drawbacks of the current method, are discussed.

Keywords: deep learning, topology optimization, image segmentation

1 Introduction

Topology optimization solves the layout problem with the following formulation: how to distribute the material inside a design domain such that the obtained structure has optimal properties and satisfies the prescribed constraints? The most challenging formulation of the problem requires the solution to be binary, i.e. it should state whether there is material or void at each part of the design domain. One of the common examples of such an optimization is the minimization of the elastic strain energy of a body for a given total weight and boundary conditions. Initiated by the demands of the automotive and aerospace industry in the 20th century, topology optimization has spread its application to a wide range of other disciplines: e.g. fluids, acoustics, electromagnetics, optics and combinations thereof [1].

All modern approaches to topology optimization used in commercial and academic software are based on finite element methods. SIMP (Simplified Isotropic Material with Penalization), which was introduced in 1989 [2], is currently a widespread, simple and efficient technique. It penalizes intermediate values of the material density, which improves the convergence of the solution to a binary one. Alternatively, the topology optimization problem can be solved with BESO (bi-directional evolutionary structural optimization) [3, 4]. The key idea of this method is to remove material where the stress is the lowest and add material where the stress is the highest. A more detailed review is given in Section 2.

For all of the above-described methods, the optimization process can be roughly divided into two stages: a general redistribution of the material and a refinement. During the first stage, the material layout varies significantly from iteration to iteration, while during the second stage the material distribution converges to the final result: the global structure remains unchanged and only local alterations are observed.

In this paper, we propose a deep learning based approach for speeding up the most time-consuming part of traditional topology optimization methods. The main novelty of this work is to state the problem as an image segmentation task. We leverage the power of deep learning methods as an efficient pixel-wise image labeling technique to accelerate modern topology optimization solvers. The key features of our approach are the following:

  • 1. acceleration of the optimization process;

  • 2. excellent generalization properties;

  • 3. a fully scalable technique.

2 Topology Optimization Problem

The current research is devoted to the topology optimization of mechanical structures. Consider a design domain $\Omega = \{\omega_j\}_{j=1}^{N}$, filled with a linear isotropic elastic material and discretized with square finite elements. The material distribution is described by the binary density variable $x_j$, which represents either the absence (0) or the presence (1) of material at each point of the design domain. Therefore, the problem we seek to solve can be written in mathematical form as:

$$\begin{cases} \min\limits_{\bm{x}} \; c(\bm{u}(\bm{x}), \bm{x}) = \sum\limits_{j=1}^{N} E_j(x_j)\, \bm{u}_j^T \bm{k}_0 \bm{u}_j \\ \text{s.t.} \;\; V(\bm{x})/V_0 = f_0 \\ \qquad\; \bm{K}\bm{U} = \bm{F} \\ \qquad\; x_j \in \{0; 1\}, \quad j = 1 \dots N \end{cases} \qquad (1)$$

where $c$ is the compliance, $\bm{u}_j$ is the element displacement vector, $\bm{k}_0$ is the element stiffness matrix for an element with unit Young's modulus, $\bm{U}$ and $\bm{F}$ are the global displacement and force vectors, respectively, and $\bm{K}$ is the global stiffness matrix. $V(\bm{x})$ and $V_0$ are the material volume and the design domain volume, respectively. $f_0$ is the prescribed volume fraction.

The discrete nature of the problem makes it difficult to solve. Therefore, the last constraint in (1) is relaxed to $x_j \in [0; 1],\; j = 1 \dots N$. The most common method for the topology optimization problem with continuous design variables is the so-called SIMP or power-law approach [2, 5]. This is a gradient-based iterative method that penalizes non-binary solutions by choosing a Young's modulus of a simple but very efficient form:

$$E_j(x_j) = E_{\text{min}} + x_j^p (E_0 - E_{\text{min}}) \qquad (2)$$

The exact implementation of the SIMP algorithm is beyond the scope of this paper. The updating schemes, as well as different heuristics, can be found in the excellent papers [6, 7, 8, 9, 10]. The topology optimization code in Matlab is described in detail in [11, 12], and a Python implementation of the SIMP algorithm is presented in [13].
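To make the method concrete, below is a minimal Python sketch of one SIMP design update in the spirit of the 99- and 88-line codes [11, 12]: the power-law interpolation of Equation (2) followed by an optimality-criteria update with a bisection on the Lagrange multiplier. Variable names and tolerances are our choices; sensitivity filtering and the finite element solve are omitted.

```python
import numpy as np

def simp_young_modulus(x, E0=1.0, E_min=1e-9, p=3.0):
    """Power-law (SIMP) interpolation of Young's modulus, Eq. (2)."""
    return E_min + x**p * (E0 - E_min)

def oc_update(x, dc, dv, f0, move=0.2):
    """Optimality-criteria update of the densities.

    x  -- current densities, dc -- compliance sensitivities (<= 0),
    dv -- volume sensitivities (> 0), f0 -- prescribed volume fraction.
    A bisection on the Lagrange multiplier enforces V(x)/V0 = f0.
    """
    l1, l2 = 1e-9, 1e9
    while (l2 - l1) / (l1 + l2) > 1e-3:
        lmid = 0.5 * (l1 + l2)
        # scale densities, then clip to the move limits and to [0, 1]
        x_new = np.clip(x * np.sqrt(-dc / (dv * lmid)),
                        np.maximum(x - move, 0.0),
                        np.minimum(x + move, 1.0))
        if x_new.mean() > f0:
            l1 = lmid
        else:
            l2 = lmid
    return x_new
```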

[Figure 1: the design domain, constraints, and loads for the half MBB-beam problem. Figure 2: the evolution of the density distribution during SIMP optimization.]

The standard half MBB-beam problem is used to illustrate the process of topology optimization. The design domain, constraints, and loads are represented in Figure 1. The optimization of this problem is demonstrated in Figure 2. During the initial iterations, the general redistribution of the material inside the design domain is performed. The rest of the optimization process refines the pixels: the densities with intermediate values converge to binary values, while the silhouette of the obtained structure remains almost unchanged.

3 Learning Topology Optimization

As illustrated in Section 2, it is enough for the solver to perform a small number $N_0$ of iterations to obtain a preliminary view of the structure. The fraction of non-binary densities can be close to 1; however, the global layout pattern is already close to the final one. The obtained image $I$ can be interpreted as a blurred image of the final structure, or an image distorted by other factors. Crucially, there are only two types of objects in this image: material and void. The image $I^*$ obtained as the result of topology optimization contains no intermediate values and can therefore be interpreted as a mask of image $I$. In this notation, starting from iteration $N_0$, the optimization process $I \rightarrow I^*$ mimics image segmentation with two classes, i.e. foreground-background segmentation.

We propose the following pipeline for topology optimization: use the SIMP method to perform the initial iterations and obtain a distribution with non-binary densities; then use a neural network to perform segmentation of the obtained image and converge the distribution to a $\{0, 1\}$ solution.
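Concretely, the pipeline can be expressed in a few lines of Python. The following is a minimal sketch: `problem`, `simp_iterate`, and the attribute names are hypothetical stand-ins for a SIMP solver such as ToPy [13], and `model` is the trained network described in Section 3.1.

```python
import numpy as np

def accelerated_topopt(problem, model, n0=5, threshold=0.5):
    """Run n0 SIMP iterations, then let the CNN predict the final structure."""
    # start from a uniform density field at the prescribed volume fraction
    x = np.full(problem.grid_shape, problem.volume_fraction)
    for _ in range(n0):
        x_prev, x = x, simp_iterate(problem, x)   # one standard SIMP update
    delta = x - x_prev                            # last density update
    inp = np.stack([x, delta], axis=-1)[None]     # (1, H, W, 2) input tensor
    segmentation = model.predict(inp)[0, ..., 0]  # one forward pass
    return (segmentation > threshold).astype(float)  # binary structure
```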

3.1 Architecture

[Figure 3: the architecture of the proposed encoder-decoder network.]

Here we introduce the Neural Network for Topology Optimization: a deep fully-convolutional neural network designed to carry out the convergence of densities during the topology optimization process. The input of the model is two grayscale images (or a two-channel image). The first one is the density distribution $X_n$ inside the design domain, obtained after the last performed iteration of the topology optimization solver. The second input is the last performed update ("gradient") of the densities, $\delta X = X_n - X_{n-1}$, i.e. the difference between the densities after the $n$-th and the $(n-1)$-th iterations. The output of the model is a grayscale image of the same resolution as the input, which represents the predicted final structure. The architecture of our model mimics the hourglass shape common in image segmentation. The proposed model has an encoder network and a corresponding decoder network, followed by a final pixel-wise classification layer. The architecture is illustrated in Figure 3.

The encoder network consists of 6 convolutional layers. Each layer has kernels of size $3 \times 3$ and is followed by a ReLU nonlinearity. The first two layers have 16 convolutional kernels each; this block is followed by max-pooling over $2 \times 2$ windows. The next two layers have 32 kernels and are also followed by a max-pooling layer. The last block consists of 2 layers with 64 kernels each.

The decoder network copies the architecture of the encoder and reverses it. The max-pooling layers are replaced with upsampling layers, each followed by concatenation with the features from the corresponding low-level layer, as in U-Net [14]. The pooling operation makes the subsequent network invariant to small translations of the input, while the concatenation of features from different layers allows the network to benefit both from the raw low-level representation and from the highly encoded parametrization of the higher levels. The decoder is followed by a convolutional layer with a single kernel and a sigmoid activation function. We included 2 dropout layers [15] as regularization for our network.

The width and height of the input image can vary; however, both must be divisible by 4 in order to guarantee the coherence of the tensor shapes in the computational graph. The proposed neural network has just 192,113 parameters.
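The description above translates into a compact Keras model. The sketch below is a plausible reading of it rather than the authors' exact code: the dropout rates and placement and the kernel size of the final layer are our assumptions, so the parameter count of this sketch need not match the reported 192,113.

```python
from tensorflow.keras import layers, models

def conv_block(x, filters, n=2):
    """n stacked 3x3 convolutions with ReLU activations."""
    for _ in range(n):
        x = layers.Conv2D(filters, 3, padding='same', activation='relu')(x)
    return x

def build_model():
    # spatial dimensions are left unspecified: the network is fully
    # convolutional, any input with sides divisible by 4 is accepted
    inp = layers.Input(shape=(None, None, 2))       # densities + last update
    e1 = conv_block(inp, 16)                        # encoder, level 1
    p1 = layers.MaxPooling2D(2)(e1)
    e2 = conv_block(p1, 32)                         # encoder, level 2
    p2 = layers.MaxPooling2D(2)(e2)
    b = layers.Dropout(0.25)(conv_block(p2, 64))    # bottleneck
    u2 = layers.Concatenate()([layers.UpSampling2D(2)(b), e2])   # skip
    d2 = conv_block(u2, 32)                         # decoder, level 2
    u1 = layers.Concatenate()([layers.UpSampling2D(2)(d2), e1])  # skip
    d1 = layers.Dropout(0.25)(conv_block(u1, 16))   # decoder, level 1
    out = layers.Conv2D(1, 3, padding='same', activation='sigmoid')(d1)
    return models.Model(inp, out)
```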

3.2 Dataset

To train the above-described model, we need example solutions of problem (1). Collecting a large dataset of real-life examples is difficult or even impossible. Therefore, we use synthetic data generated with ToPy [13], an open-source solver for 2D and 3D topology optimization based on the SIMP approach.

To generate the dataset, we sampled pseudo-random problem formulations and performed 100 iterations of the standard SIMP method. Each problem is defined by its constraints and loads. The sampling strategy is the following:

  • 1. The number of nodes with fixed $x$ and $y$ translations and the number of loads are sampled from Poisson distributions:

    $N_x \sim P(\lambda = 2), \qquad N_y, N_L \sim P(\lambda = 1)$
  • 2. The nodes for each of the above-described constraints are sampled from a distribution defined on the grid. The probability of choosing a boundary node is 100 times higher than that of an inner node.

  • 3. The load values are chosen as $-1$.

  • 4. The volume fraction is sampled from the normal distribution $f_0 \sim \mathcal{N}(\mu = 0.5, \sigma = 0.1)$.

The obtained dataset has 10,000 objects (the dataset and the related code are available at https://github.com/ISosnovik/top). Each object is a tensor of shape $100 \times 40 \times 40$: 100 iterations of the optimization process for a problem defined on a regular $40 \times 40$ grid. A sketch of the sampling procedure is given below.
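The function and field names in this sketch are illustrative and follow no published API; the clipping of the volume fraction and the guarantee of at least one load are our assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_problem(nx=40, ny=40):
    """Sample one pseudo-random problem on an (nx+1) x (ny+1) node grid."""
    n_fix_x = rng.poisson(2)           # nodes with fixed x-translation
    n_fix_y = rng.poisson(1)           # nodes with fixed y-translation
    n_loads = max(rng.poisson(1), 1)   # at least one load (our assumption)
    # boundary nodes are 100x more likely to be chosen than inner nodes
    nodes = [(i, j) for i in range(nx + 1) for j in range(ny + 1)]
    w = np.array([100.0 if i in (0, nx) or j in (0, ny) else 1.0
                  for i, j in nodes])
    p = w / w.sum()
    pick = lambda n: [nodes[k] for k in rng.choice(len(nodes), size=n, p=p)]
    return {
        'fixed_x': pick(n_fix_x),
        'fixed_y': pick(n_fix_y),
        'loads': [(node, -1.0) for node in pick(n_loads)],  # unit loads
        # clipping to a sensible range is our assumption
        'volume_fraction': float(np.clip(rng.normal(0.5, 0.1), 0.1, 0.9)),
    }
```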

3.3 Training

We used the dataset described in Section 3.2 to train our model. During the training process, we "stopped" the SIMP solver after $k$ iterations and used the obtained design variables as the input of our model. The input images were augmented with transformations from the group D4: horizontal and vertical flips and rotation by 90 degrees. $k$ was sampled from a distribution $\mathcal{F}$; the Poisson distribution $P(\lambda)$ and the discrete uniform distribution $U[1, 100]$ are of interest to us. For training the network, we used an objective function of the following form:

$$\mathcal{L} = \mathcal{L}_{\text{conf}}(X_{\text{true}}, X_{\text{pred}}) + \beta\, \mathcal{L}_{\text{vol}}(X_{\text{true}}, X_{\text{pred}}) \qquad (3)$$

where the confidence loss is a binary cross-entropy:

$$\mathcal{L}_{\text{conf}}(X_{\text{true}}, X_{\text{pred}}) = -\frac{1}{NM} \sum\limits_{i=1}^{N} \sum\limits_{j=1}^{M} \Big[ X_{\text{true}}^{ij} \log(X_{\text{pred}}^{ij}) + (1 - X_{\text{true}}^{ij}) \log(1 - X_{\text{pred}}^{ij}) \Big] \qquad (4)$$

where $N \times M$ is the resolution of the image. The second summand in Equation (3) represents the volume fraction constraint:

$$\mathcal{L}_{\text{vol}}(X_{\text{true}}, X_{\text{pred}}) = (\bar{X}_{\text{pred}} - \bar{X}_{\text{true}})^2 \qquad (5)$$
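Equations (3)-(5) translate almost directly into a Keras-compatible loss. A minimal sketch follows; the value of $\beta$ is not specified in this section, so `beta=1.0` below is a placeholder.

```python
import tensorflow as tf

def topopt_loss(beta=1.0):
    """Objective of Eq. (3): binary cross-entropy plus volume constraint."""
    def loss(x_true, x_pred):
        eps = 1e-7  # avoid log(0)
        x_pred = tf.clip_by_value(x_pred, eps, 1.0 - eps)
        # Eq. (4): pixel-wise binary cross-entropy, averaged over the image
        l_conf = -tf.reduce_mean(
            x_true * tf.math.log(x_pred)
            + (1.0 - x_true) * tf.math.log(1.0 - x_pred))
        # Eq. (5): squared difference of the mean densities
        # (averaged over the whole batch here for simplicity)
        l_vol = tf.square(tf.reduce_mean(x_pred) - tf.reduce_mean(x_true))
        return l_conf + beta * l_vol
    return loss

# usage: model.compile(optimizer='adam', loss=topopt_loss())
```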

We used the Adam [16] optimizer with default parameters and halved the learning rate once during the training process. All code is written in Python; the implementation is available at https://github.com/ISosnovik/nn4topopt. For the neural networks, we used Keras [17] with the TensorFlow [18] backend. An NVIDIA Tesla K80 was used for the deep learning computations. Training a neural network from scratch took about 80-90 minutes.
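As an illustration of the training-time input preparation described above, the following sketch builds one training pair from a single $100 \times 40 \times 40$ dataset object, sampling the stopping iteration $k$ from $P(\lambda)$ and applying a random transformation from the group D4. The storage layout of the tensors is an assumption based on Section 3.2.

```python
import numpy as np

def training_pair(iterations, rng, lam=10):
    """Build one (input, target) pair from a 100 x 40 x 40 tensor."""
    k = min(max(rng.poisson(lam), 1), 99)   # "stop" SIMP after k iterations
    x = iterations[k]                        # densities after iteration k
    delta = iterations[k] - iterations[k - 1]   # last density update
    target = iterations[-1]                  # final structure
    # random element of D4, composed from flips and a 90-degree rotation
    if rng.random() < 0.5:
        x, delta, target = (np.fliplr(a) for a in (x, delta, target))
    if rng.random() < 0.5:
        x, delta, target = (np.flipud(a) for a in (x, delta, target))
    if rng.random() < 0.5:
        x, delta, target = (np.rot90(a) for a in (x, delta, target))
    return np.stack([x, delta], axis=-1), target[..., None]
```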

4 Results

[Figure 4: structures predicted by the proposed pipeline compared to the output of the SIMP solver.]

The goal of our experiments is to demonstrate that the proposed model and the overall pipeline are useful for solving topology optimization problems. We compare the performance of our approach with standard SIMP solver [13] in terms of the accuracy of the obtained structure and the average time consumption.

We report two metrics common in image segmentation evaluation: binary accuracy and intersection over union (IoU). Let $n_l,\ l = 0, 1$ be the total number of pixels of class $l$, and let $\omega_{tp},\ t, p = 0, 1$ be the total number of pixels of class $t$ predicted to belong to class $p$. Then:

$$\text{Bin. Acc.} = \frac{\omega_{00} + \omega_{11}}{n_0 + n_1}; \qquad \text{IoU} = \frac{1}{2} \Big[ \frac{\omega_{00}}{n_0 + \omega_{10}} + \frac{\omega_{11}}{n_1 + \omega_{01}} \Big] \qquad (6)$$
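Both metrics follow directly from the four entries of the confusion matrix. A NumPy sketch, with class 1 as material and class 0 as void:

```python
import numpy as np

def binary_metrics(y_true, y_pred, threshold=0.5):
    """Binary accuracy and IoU of Eq. (6) from pixel-wise predictions."""
    t = y_true.astype(bool)
    p = y_pred > threshold
    w11 = np.sum(t & p)     # material predicted as material
    w00 = np.sum(~t & ~p)   # void predicted as void
    w01 = np.sum(~t & p)    # void predicted as material
    w10 = np.sum(t & ~p)    # material predicted as void
    n0, n1 = np.sum(~t), np.sum(t)
    acc = (w00 + w11) / (n0 + n1)
    iou = 0.5 * (w00 / (n0 + w10) + w11 / (n1 + w01))
    return acc, iou
```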

We examine 4 neural networks with the same architecture but trained with different policies: the number of iterations after which we "stopped" the SIMP algorithm was sampled from different distributions. One neural network was trained with the discrete uniform distribution $U[1, 100]$, and the other three models were trained with Poisson distributions $P(\lambda)$ with $\lambda = 5, 10, 30$.

4.1 Accuracy and performance

We conducted several experiments to illustrate the application of the proposed pipeline and model to mechanical problems. Figure 4 demonstrates that our neural network restores the final structure even when applied after only 5 iterations. The output of the model is close to that of the SIMP algorithm, and the overall topology of the structure is the same. Furthermore, the time consumption of the proposed method in this case is almost 20 times smaller.

Neural networks trained with different policies produce close results: the models preserve the final structure up to some rare pixel-wise changes. However, the accuracy of these models depends on the number of initial iterations performed by the SIMP algorithm. Tables 1 and 2 summarize the results obtained in the series of experiments. The trained models are substantially more accurate than thresholding applied after the same number of SIMP iterations. Some models benefit when applied after 5-10 iterations, while others demonstrate better results in the middle or at the end of the process. The proposed pipeline can significantly accelerate the overall algorithm with a minimal reduction in accuracy, especially when the CNN is used at the beginning of the optimization process.

The neural network trained with the discrete uniform distribution does not demonstrate the highest binary accuracy and IoU compared to the other models until the last iterations. However, it outperforms the SIMP algorithm with thresholding throughout the optimization process.

Table 1: Binary accuracy (%) versus the number of initial SIMP iterations (mechanical problems).

Method       |    5 |   10 |   15 |   20 |   30 |   40 |   50 |   60 |   80
-------------|------|------|------|------|------|------|------|------|-----
Thresholding | 92.9 | 95.4 | 96.5 | 97.1 | 97.7 | 98.1 | 98.4 | 98.6 | 98.9
CNN P(5)     | 95.8 | 97.3 | 97.7 | 97.9 | 98.2 | 98.4 | 98.5 | 98.6 | 98.7
CNN P(10)    | 95.4 | 97.6 | 98.1 | 98.4 | 98.7 | 98.9 | 99.0 | 99.0 | 99.0
CNN P(30)    | 92.7 | 96.3 | 97.8 | 98.5 | 99.0 | 99.2 | 99.4 | 99.5 | 99.6
CNN U[1,100] | 94.7 | 96.8 | 97.7 | 98.2 | 98.7 | 99.0 | 99.3 | 99.4 | 99.6
Table 2: IoU (%) versus the number of initial SIMP iterations (mechanical problems).

Method       |    5 |   10 |   15 |   20 |   30 |   40 |   50 |   60 |   80
-------------|------|------|------|------|------|------|------|------|-----
Thresholding | 86.8 | 91.2 | 93.3 | 94.3 | 95.6 | 96.3 | 96.8 | 97.3 | 97.9
CNN P(5)     | 92.0 | 94.7 | 95.4 | 96.0 | 96.5 | 96.9 | 97.1 | 97.3 | 97.4
CNN P(10)    | 91.1 | 95.3 | 96.4 | 96.9 | 97.4 | 97.8 | 98.0 | 98.0 | 98.1
CNN P(30)    | 86.4 | 92.9 | 95.7 | 97.0 | 98.1 | 98.5 | 98.8 | 99.0 | 99.2
CNN U[1,100] | 90.0 | 93.9 | 95.5 | 96.4 | 97.5 | 98.1 | 98.6 | 98.8 | 99.2

4.2 Transferability

This research is dedicated to the application of neural networks to the topology optimization of minimal compliance problems. Nevertheless, the proposed model does not rely on any prior knowledge of the nature of the problem. Although we used a mechanical dataset during training, other types of problems from the topology optimization framework can be solved with the proposed pipeline. To examine the generalization properties of our model, we generated a small dataset of heat conduction problems defined on a regular $40 \times 40$ grid. The exact solutions and the intermediate densities were obtained in exactly the same way as described in Section 3.

The conducted experiments are summarized in Tables 3 and 4. During the initial part of the optimization process, the results of the pre-trained CNNs are more accurate than those of thresholding. Our model approximates the mapping to the final structure precisely when the training and validation datasets come from the same distribution; however, it still mimics the updates of the SIMP method during the initial iterations even when applied to another dataset. Therefore, this pipeline can be useful for fast prediction of the rough structure in various topology optimization problems.

The neural network described in Section 3 is fully convolutional, i.e. it consists of convolutional, pooling, upsampling, and dropout layers; the architecture itself imposes no constraints on the size of the input data. In this experiment, we examined the scalability of our method. The model had been trained on the original dataset with square images of size $40 \times 40$. Figure 5 visualizes the result of applying the CNN to problems defined on grids of different resolutions. Changes in the aspect ratio and reasonable changes in the resolution of the input data do not affect the accuracy of the model: the pre-trained neural network successfully reconstructs the final structure for a given problem. Significant changes in the size of the input data would require additional training, because the typical size of common patterns grows with the resolution of the image. Nevertheless, the demonstrated cases did not require any fine-tuning and allowed the model to transfer from one resolution to another.
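Since the network accepts any input whose sides are divisible by 4, transferring to a new grid requires no changes to the model. A hypothetical usage sketch, reusing the `problem`, `simp_iterate`, and `model` stand-ins from the earlier snippets:

```python
import numpy as np

# hypothetical 80 x 200 problem (both sides divisible by 4)
x = np.full((80, 200), 0.5)             # uniform initial densities
for _ in range(5):                       # a few SIMP iterations
    x_prev, x = x, simp_iterate(problem, x)
inp = np.stack([x, x - x_prev], axis=-1)[None]   # shape (1, 80, 200, 2)
structure = model.predict(inp)[0, ..., 0] > 0.5  # predicted binary layout
```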

Table 3: Binary accuracy (%) versus the number of initial SIMP iterations (heat conduction problems).

Method       |    5 |   10 |   15 |   20 |   30 |   40 |   50 |   60 |   80
-------------|------|------|------|------|------|------|------|------|-----
Thresholding | 97.5 | 98.4 | 98.8 | 99.1 | 99.4 | 99.6 | 99.7 | 99.8 | 99.9
CNN P(5)     | 98.1 | 98.7 | 99.0 | 99.2 | 99.4 | 99.5 | 99.6 | 99.7 | 99.7
CNN P(10)    | 98.1 | 98.8 | 99.0 | 99.2 | 99.4 | 99.5 | 99.6 | 99.7 | 99.8
CNN P(30)    | 97.3 | 99.0 | 99.2 | 99.4 | 99.5 | 99.6 | 99.7 | 99.7 | 99.8
CNN U[1,100] | 97.8 | 98.8 | 99.1 | 99.3 | 99.5 | 99.6 | 99.7 | 99.7 | 99.8
Table 4: IoU (%) versus the number of initial SIMP iterations (heat conduction problems).

Method       |    5 |   10 |   15 |   20 |   30 |   40 |   50 |   60 |   80
-------------|------|------|------|------|------|------|------|------|-----
Thresholding | 95.1 | 96.8 | 97.6 | 98.1 | 98.8 | 99.2 | 99.4 | 99.6 | 99.9
CNN P(5)     | 96.2 | 97.5 | 98.0 | 98.4 | 98.8 | 99.0 | 99.2 | 99.3 | 99.5
CNN P(10)    | 96.3 | 97.6 | 98.1 | 98.4 | 98.9 | 99.1 | 99.3 | 99.4 | 99.5
CNN P(30)    | 94.8 | 98.0 | 98.5 | 98.7 | 99.0 | 99.2 | 99.3 | 99.4 | 99.5
CNN U[1,100] | 95.7 | 97.7 | 98.2 | 98.6 | 98.9 | 99.2 | 99.3 | 99.4 | 99.6
[Figure 5: application of the pre-trained CNN to problems defined on grids of different resolutions.]

5 Related work

To the best of our knowledge, the current research is the first to utilize a deep learning approach for the topology optimization problem. It is inspired by recent successful applications of deep learning to problems in computational physics. Greff et al. [19] used a fully-connected neural network as a mapping function from the nano-material configuration and the input voltage to the output current. The adaptation of the restricted Boltzmann machine for solving the quantum many-body problem was demonstrated in [20]. Mills et al. [21] used the machinery of deep learning to learn the mapping between potential and energy, bypassing the need to numerically solve the Schrödinger equation and to compute wave functions. Tompson et al. [22] and Um et al. [23] accelerated the modeling of liquids with neural networks. The paper [24] demonstrates how a deep neural network trained on quantum mechanical density functional theory calculations can learn an accurate and transferable potential for organic molecules. The cutting-edge research [25] shows how generative adversarial networks can be used to simulate 3D high-energy particle showers in multi-layer electromagnetic calorimeters.

6 Conclusion

In this paper, we proposed a neural network as an effective tool for the acceleration of the topology optimization process. Our model learned the mapping from an intermediate result of the iterative method to the final structure of the design domain. This allowed us to stop the SIMP method early and significantly decrease the total time consumption.

We demonstrated that a model trained on a dataset of minimal compliance problems can produce a rough approximation of the solution for other types of topology optimization problems. Various experiments showed that the proposed neural network transfers successfully from a dataset with a small resolution to problems defined on grids of higher resolution.

References

  • [1] M. P. Bendsøe, E. Lund, N. Olhoff, O. Sigmund, Topology optimization - broadening the areas of application, Control and Cybernetics 34 (2005) 7–35.
  • [2] M. P. Bendsøe, Optimal shape design as a material distribution problem, Structural and Multidisciplinary Optimization 1(4) (1989) 193–202.
  • [3] C. Mattheck, S. Burkhardt, A new method of structural shape optimization based on biological growth, International Journal of Fatigue 12(3) (1990) 185–190.
  • [4] Y. M. Xie, G. P. Steven, A simple evolutionary procedure for structural optimization, Computers & Structures 49(5) (1993) 885–896.
  • [5] H. Mlejnek, Some aspects of the genesis of structures, Structural and Multidisciplinary Optimization 5(1) (1992) 64–69.
  • [6] M. P. Bendsøe, Optimization of structural topology, shape, and material, Vol. 414, Springer, 1995.
  • [7] O. Sigmund, On the design of compliant mechanisms using topology optimization, Journal of Structural Mechanics 25(4) (1997) 493–524.
  • [8] B. Bourdin, Filters in topology optimization, International Journal for Numerical Methods in Engineering 50(9) (2001) 2143–2158.
  • [9] K. Svanberg, H. Svärd, Density filters for topology optimization based on the Pythagorean means, Structural and Multidisciplinary Optimization 48(5) (2013) 859–875.
  • [10] A. A. Groenwold, L. Etman, A simple heuristic for gray-scale suppression in optimality criterion-based topology optimization, Structural and Multidisciplinary Optimization 39(2) (2009) 217–225.
  • [11] O. Sigmund, A 99 line topology optimization code written in Matlab, Structural and Multidisciplinary Optimization 21(2) (2001) 120–127.
  • [12] E. Andreassen, A. Clausen, M. Schevenels, B. S. Lazarov, O. Sigmund, Efficient topology optimization in MATLAB using 88 lines of code, Structural and Multidisciplinary Optimization 43(1) (2011) 1–16.
  • [13] W. Hunter, et al., ToPy - topology optimization with Python, https://github.com/williamhunter/topy (2017).
  • [14] O. Ronneberger, P. Fischer, T. Brox, U-Net: Convolutional networks for biomedical image segmentation, in: International Conference on Medical Image Computing and Computer-Assisted Intervention, Springer, 2015, pp. 234–241.
  • [15] G. E. Hinton, N. Srivastava, A. Krizhevsky, I. Sutskever, R. R. Salakhutdinov, Improving neural networks by preventing co-adaptation of feature detectors, arXiv preprint arXiv:1207.0580.
  • [16] D. Kingma, J. Ba, Adam: A method for stochastic optimization, arXiv preprint arXiv:1412.6980.
  • [17] F. Chollet, et al., Keras, https://github.com/fchollet/keras (2015).
  • [18] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, et al., TensorFlow: Large-scale machine learning on heterogeneous distributed systems, arXiv preprint arXiv:1603.04467.
  • [19] K. Greff, R. van Damme, J. Koutnik, H. Broersma, J. Mikhal, C. Lawrence, W. van der Wiel, J. Schmidhuber, Using neural networks to predict the functionality of reconfigurable nano-material networks.
  • [20] G. Carleo, M. Troyer, Solving the quantum many-body problem with artificial neural networks, Science 355(6325) (2017) 602–606.
  • [21] K. Mills, M. Spanner, I. Tamblyn, Deep learning and the Schrödinger equation, arXiv preprint arXiv:1702.01361.
  • [22] J. Tompson, K. Schlachter, P. Sprechmann, K. Perlin, Accelerating Eulerian fluid simulation with convolutional networks, arXiv preprint arXiv:1607.03597.
  • [23] K. Um, X. Hu, N. Thuerey, Liquid splash modeling with neural networks, arXiv preprint arXiv:1704.04456.
  • [24] J. S. Smith, O. Isayev, A. E. Roitberg, ANI-1: an extensible neural network potential with DFT accuracy at force field computational cost, Chemical Science 8(4) (2017) 3192–3203.
  • [25] M. Paganini, L. de Oliveira, B. Nachman, CaloGAN: Simulating 3D high energy particle showers in multi-layer electromagnetic calorimeters with generative adversarial networks, arXiv preprint arXiv:1705.02355.

Appendix A Dataset

[Figure: examples from the generated dataset.]

Appendix B Results

[Figure: additional results.]