In [9]:
import numpy as np
import json
with open("data/super_data.json", "r") as f:
super_data = json.load(f)
p_data=super_data['papers']
index_phrase=super_data['index_phrase']
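The cells below assume that each record in super_data['papers'] carries an integer-like 'index', a 'title' and 'abstract', and two pairs of parallel lists: the directed citations with their similarity weights, and the undirected union of citing/cited links with theirs. A minimal sketch of that assumed layout (field names are taken from the code below; the values here are hypothetical):

example_paper = {
    'index': '322',                    # node id, stored as a string
    'title': '...',
    'abstract': '...',
    'citations': ['15', '87'],         # directed: papers this one cites
    'citations_sim': [0.42, 0.13],     # one similarity weight per citation
    'all_cite': ['15', '87', '705'],   # undirected: citations plus citers
    'all_cite_sim': [0.42, 0.13, 0.30],
}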
In [10]:
# build the PageRank transition graphs
node_num = len(p_data)
bayes_graph = np.zeros((node_num, node_num))
bayes_rank = np.ones(node_num)
bayes_reserve = np.zeros(node_num)
markov_graph = np.zeros((node_num, node_num))
markov_rank = np.ones(node_num)
markov_reserve = np.zeros(node_num)
# 'originality' smoothing: nodes with fewer/weaker citations keep a larger share of their own rank
originality = 0.0000001
for p in p_data:
    index = int(p['index'])
    # undirected citation links
    markov_weight = np.array(p['all_cite_sim'], dtype=float)
    z = originality + sum(p['all_cite_sim'])
    markov_weight /= z
    markov_reserve[index] = originality / z
    markov_graph[index][index] = originality / z
    for i, c in enumerate(p['all_cite']):
        markov_graph[int(c)][index] = markov_weight[i]
    # directed citation links
    bayes_weight = np.array(p['citations_sim'], dtype=float)
    z = originality + sum(p['citations_sim'])
    bayes_weight /= z
    bayes_reserve[index] = originality / z
    bayes_graph[index][index] = originality / z
    for i, c in enumerate(p['citations']):
        bayes_graph[int(c)][index] = bayes_weight[i]
bayes_rank = bayes_rank - bayes_reserve
markov_rank = markov_rank - markov_reserve
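By construction, every column of each transition matrix should sum to one: the normalized citation weights contribute (z - originality)/z and the diagonal self-loop contributes originality/z. A quick check along these lines can catch indexing mistakes; it assumes every node index appears exactly once in p_data and that papers list no duplicate or self-citations, since such entries would overwrite matrix cells:

# each column should be a probability distribution over source nodes
assert np.allclose(bayes_graph.sum(axis=0), 1.0)
assert np.allclose(markov_graph.sum(axis=0), 1.0)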
In [3]:
iterations = 500
for i in range(iterations):
    bayes_rank = np.dot(bayes_graph, bayes_rank)
for i in range(iterations):
    markov_rank = np.dot(markov_graph, markov_rank)
bayes_rank += bayes_reserve
markov_rank += markov_reserve
bayes_rank[705]
Out[3]:
2.796697653458339
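Running a fixed budget of 500 multiplications works, but the power iteration can also be stopped once the rank vector settles. A sketch of that variant, not part of the original run; power_iterate is a hypothetical helper, tol an assumed tolerance, and np is the numpy module already imported above:

def power_iterate(graph, rank, reserve, tol=1e-10, max_iter=500):
    # repeat rank <- graph . rank until the largest change drops below tol
    for _ in range(max_iter):
        new_rank = np.dot(graph, rank)
        if np.abs(new_rank - rank).max() < tol:
            rank = new_rank
            break
        rank = new_rank
    return rank + reserve

# same intent as the fixed-iteration loops above:
# bayes_rank = power_iterate(bayes_graph, np.ones(node_num) - bayes_reserve, bayes_reserve)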
In [4]:
# sanity check: list every paper whose directed-graph score exceeds 2.6
count = 0
for i in range(node_num):
    if bayes_rank[i] > 2.6:
        count += 1
        print '#' + str(count) + ' ' + p_data[i]['index']
        print p_data[i]['title']
        print 'score: ' + str(bayes_rank[i])
        print ' '
        print p_data[i]['abstract']
        print ' '
        print ' '
#1 322
Time-Dependent Reliability Analysis by a Sampling Approach to Extreme Values of Stochastic Processes
score: 3.16610132847
Maintaining high accuracy and efficiency is a challenging issue in time-dependent reliability analysis. In this work, an accurate and efficient method is proposed for limit-state functions with the following features: The limit-state function is implicit with respect to time, and its input contains stochastic processes; the stochastic processes include only general strength and stress variables, or the limit-state function is monotonic to these stochastic processes. The new method employs random sampling approaches to estimate the distributions of the extreme values of the stochastic processes. The extreme values are then used to replace the corresponding stochastic processes, and consequently the time-dependent reliability analysis is converted into its time-invariant counterpart. The commonly used time-invariant reliability method, the First Order Reliability Method, is then applied for the time-variant reliability analysis. The results show that the proposed method significantly improves the accuracy and efficiency of time-dependent reliability analysis.
#2 485
Design Preference Elicitation, Derivative-Free Optimization and Support Vector Machine Search
score: 3.31184902427
In design preference elicitation, we seek to find individuals’ design preferences, usually through an interactive process that would need only a very small number of interactions. Such a process is akin to an optimization algorithm that operates with point values of an unknown function and converges in a small number of iterations. In this paper, we assume the existence of individual preference functions and show that the elicitation task can be translated into a derivative-free optimization (DFO) problem. Different from commonly-studied DFO formulations, we restrict the outputs to binary classes discriminating sample points with higher function values from those with lower values, to capture people’s natural way of expressing preferences through comparisons. To this end, we propose a heuristic search algorithm using support vector machines (SVM) that can locate near-optimal solutions with a limited number of iterations and a small sampling size. Early experiments with test functions show reliable performance when the function is not noisy. Further, SVM search appears promising in design preference elicitation when the dimensionality of the design variable domain is relatively high.
#3 488
Computation of the Usage Contexts Coverage of a Jigsaw With CSP Techniques
score: 3.40744132253
In the context of the Usage Context Based Design (UCBD) of a product-service, a taxonomy of variables is suggested to set up the link between the design parameters of a product-service and the part of a set of expected usages that may be covered. This paper implements a physics-based model to provide a performance prediction for each usage context that also depends on the user skill. The physics describing the behavior, and consequently the performances, of a jigsaw are established. Simulating the usage coverage numerically is non-trivial for two reasons: the presence of circular references in physical relations, and the need to efficiently propagate value sets or domains instead of accurate values. For these two reasons, we modeled the usage coverage issue as a Constraint Satisfaction Problem, and we obtain the expected service performances and the value of a covered-usage indicator.
#4 504
Bayesian Network Classifiers for Set-Based Collaborative Design
score: 3.5923304191
Complex design problems are typically decomposed into smaller design problems that are solved by domain-specific experts who must then coordinate their solutions into a satisfactory system-wide solution. In set-based collaborative design, collaborating engineers coordinate themselves by communicating multiple design alternatives at each step of the design process. The goal in set-based collaborative design is to spend additional resources exploring multiple options in the early stages of the design process, in exchange for less iteration in the latter stages, when iterative rework tends to be most expensive. Several methods have been proposed for representing sets of designs, including intervals, surrogate models, fuzzy membership functions, and probability distributions. In this paper, we introduce the use of Bayesian networks for capturing sets of promising designs, thereby classifying the design space into satisfactory and unsatisfactory regions. The method is compared to intervals in terms of its capacity to accurately classify satisfactory design regions as a function of the number of available data points. A simplified, multilevel design problem for an unmanned aerial vehicle is presented as the motivating example.
#5 517
An Extended Pattern Search Approach to Wind Farm Layout Optimization
score: 3.40711996436
An extended pattern search approach is presented for optimizing the placement of wind turbines on a wind farm. The algorithm will develop a two-dimensional layout for a given number of turbines, employing an objective function that minimizes costs while maximizing the total power production of the farm. The farm cost is developed using an established simplified model that is a function of the number of turbines. The power development of the farm is estimated using an established simplified wake model, which accounts for the aerodynamic effects of turbine blades on downstream wind speed, to which the power output is directly proportional. The interaction of the turbulent wakes developed by turbines in close proximity largely determines the power capability of the farm. As pattern search algorithms are deterministic, multiple extensions are presented to aid escaping local optima by infusing stochastic characteristics into the algorithm. This stochasticity improves the algorithm’s performance, yielding better results than purely deterministic search methods. Three test cases are presented: a) constant, unidirectional wind, b) constant, multidirectional wind, and c) varying, multidirectional wind. Resulting layouts developed by this extended pattern search algorithm develop more power than previously explored algorithms with the same evaluation models and objective functions. In addition, the algorithm’s layouts motivate a heuristic that yields the best layouts found to date.
#6 525
Optimizing the Unrestricted Placement of Turbines of Differing Rotor Diameters in a Wind Farm for Maximum Power Generation
score: 3.18596814145
This paper presents a new method (the Unrestricted Wind Farm Layout Optimization (UWFLO)) of arranging turbines in a wind farm to achieve maximum farm efficiency. The powers generated by individual turbines in a wind farm are dependent on each other, due to velocity deficits created by the wake effect. A standard analytical wake model has been used to account for the mutual influences of the turbines in a wind farm. A variable induction factor, dependent on the approaching wind velocity, estimates the velocity deficit across each turbine. Optimization is performed using a constrained Particle Swarm Optimization (PSO) algorithm. The model is validated against experimental data from a wind tunnel experiment on a scaled down wind farm. Reasonable agreement between the model and experimental results is obtained. A preliminary wind farm cost analysis is also performed to explore the effect of using turbines with different rotor diameters on the total power generation. The use of differing rotor diameters is observed to play an important role in improving the overall efficiency of a wind farm.
#7 705
Port-Based Ontology Modeling for Product Conceptual Design
score: 2.79669765346
Ontology has been recognized as an important means of representing design knowledge in product development; however, most ontology creation has not yet been carried out systematically. A port, as the location of intended interaction between a component and its environment, plays an important role in product conceptual design: it constitutes the interface of a component and defines its boundary. This paper introduces an approach to port-based ontology modeling (PBOM) for product conceptual design that makes it convenient to abstractly represent the intended exchange of signals, energy and/or material, and to create and manage port-based domain ontologies. In this paper, the port concept and port functional descriptions expressed in natural language are first presented, and their semantic synthesis is used to describe the port ontology. Secondly, an ontology repository is built containing the assorted primitive concepts and primitive knowledge needed to map component connections and interactions. Meanwhile, a port-based multi-view model comprising functional, behavior and configuration views is articulated, and the attributes and taxonomy of ports in a hierarchy are presented. Next, a port-based ontology language (PBOL) is described to represent the process of port ontology refinement, and a port-based FBS modeling framework is constructed to describe system configuration. Furthermore, a formal knowledge framework for managing comprehensive knowledge is proposed, which could help designers create, edit, organize, represent and visualize product knowledge. Finally, a revised tape case is employed to validate the efficiency of the port ontology for product conceptual design and illustrate its application.
#8 713
Bayesian Reliability Analysis With Evolving, Insufficient, and Subjective Data Sets
score: 2.71254969159
A primary concern in product design is ensuring high system reliability amidst various uncertainties throughout a product life-cycle. To achieve high reliability, uncertainty data for complex product systems must be adequately collected, analyzed, and managed throughout the product life-cycle. However, despite years of research, system reliability assessment is still difficult, mainly due to the challenges of evolving, insufficient, and subjective data sets. Therefore, the objective of this research is to establish a new paradigm of reliability prediction that enables the use of evolving, insufficient, and subjective data sets (from expert knowledge, customer surveys, system inspection & testing, and field data) over the entire product life-cycle. This research will integrate probability encoding methods into a Bayesian updating mechanism, referred to as the Bayesian Information Toolkit (BIT). Likewise, a Bayesian Reliability Toolkit (BRT) will be created by incorporating reliability analysis into the Bayesian updating mechanism. In this research, both BIT and BRT will be integrated to predict reliability even with evolving, insufficient, and subjective data sets. It is shown that the proposed Bayesian reliability analysis can predict the reliability of door closing performance in a vehicle body-door subsystem where the available data sets are limited, subjective, and evolving.
#9 721
Transformation Facilitators: A Quantitative Analysis of Reconfigurable Products and Their Characteristics
score: 3.53355888509
Products that transform into multiple states give access to greater flexibility and functionality in a single system. These “transformers” capture the imagination and can be elegant, compact, and convenient. Mechanical transformers are usually designed
#10 734
Evaluating the Performance of Visual Steering Commands for User-Guided Pareto Frontier Sampling During Trade Space Exploration
score: 3.99999895626
Trade space exploration is a promising decision-making paradigm that provides a visual and more intuitive means for formulating, adjusting, and ultimately solving design optimization problems. This is achieved by combining multi-dimensional data visualization techniques with visual steering commands to allow designers to “steer” the optimization process while searching for the best, or Pareto optimal, designs. In this paper, we compare the performance of different combinations of visual steering commands implemented by two users to a multi-objective genetic algorithm that is executed “blindly” on the same problem with no human intervention. The results indicate that the visual steering commands — regardless of the combination in which they are invoked — provide a 4x–7x increase in the number of Pareto solutions that are obtained when the human is “in-the-loop” during the optimization process. As such, this study provides the first empirical evidence of the benefits of interactive visualization-based strategies to support engineering design optimization and decision-making. Future work is also discussed.
#11 790
An Efficient Re-Analysis Methodology for Probabilistic Vibration of Large-Scale Structures
score: 4.34669760058
It is challenging to perform probabilistic analysis and design of large-scale structures because it requires repeated finite-element analyses of large models and each analysis is expensive. This paper presents a methodology for probabilistic analysis and reliability-based design optimization of large-scale structures that consists of two re-analysis methods: one for estimating the deterministic vibratory response and another for estimating the probability of the response exceeding a certain level. Deterministic re-analysis can efficiently analyze large-scale finite element models consisting of tens or hundreds of thousands of degrees of freedom and large numbers of design variables that vary in a wide range. Probabilistic re-analysis calculates very efficiently the system reliability for different probability distributions of the design variables by performing a single Monte Carlo simulation. The methodology is demonstrated on probabilistic vibration analysis and a reliability-based design optimization of a realistic vehicle model. It is shown that the computational cost of the proposed re-analysis method for a single reliability analysis is about 1/20th of the cost of the same analysis using NASTRAN. Moreover, the probabilistic re-analysis approach enables a designer to perform reliability-based design optimization of the vehicle at a cost almost equal to that of a single reliability analysis. Without using the probabilistic re-analysis approach, it would be impractical to perform reliability-based design optimization of the vehicle.
#12 823
Design Optimization of a Laptop Computer Using Aggregate and Mixed Logit Demand Models With Consumer Survey Data
score: 5.79734747482
Laptop computers are designed in a variety of shapes and sizes in order to satisfy diverse consumer preferences. Each design is optimized to attract consumers with a particular set of preferences for design tradeoffs. Gaining a better understanding of these tradeoffs and preferences is beneficial to both laptop designers and to consumers. This paper introduces an engineering model for laptop computer design and a demand model derived from a main-effects choice-based conjoint survey. Several demand model specifications are compared, including linear-in-parameters and discrete part-worth specifications for aggregate multinomial logit and mixed logit models. An integrated optimization scheme combines the engineering model with each demand model form for profit maximization. The solutions of different optimal laptop designs and market share predictions resulting from the unique characteristics of each demand model specification are examined and compared.
#13 825
Measurement of Headlight Form Preference Using a Choice Based Conjoint Analysis
score: 3.70823237522
The measurement and understanding of user aesthetic preference for form is a critical element to the product development process and has been a design challenge for many years. In this article preference is represented in a utility function directly related to the engineering representation for the automobile headlight. A method is proposed to solicit and measure customer preferences for shape of the automobile headlight using a choice task on a main-effects conjoint survey design to discover and design the most preferred shape.
#14 827
Preference Inconsistency in Multidisciplinary Design Decision Making
score: 3.70456157563
Research from behavioral psychology and experimental economics asserts that individuals construct preferences on a case-by-case basis when called to make a decision. A common, implicit assumption in engineering design is that user preferences exist a priori. Thus, preference elicitation methods used in design decision making can lead to preference inconsistencies across elicitation scenarios. This paper offers a framework for understanding preference inconsistencies, within and across individual users. We give examples of three components of this new framework: comparative, internal, and external inconsistencies across users. The examples demonstrate the impact of inconsistent preference construction on common engineering and marketing design methods, including discrete choice analysis, modeling stated vs. revealed preferences, and the Kano method and thus QFD. Exploring and explaining preference inconsistencies produces new understandings of the relationship between user and product.
#15 872
Optimal Partitioning and Coordination Decisions in Decomposition-Based Design Optimization
score: 7.99999127518
Solution of complex system design problems using distributed, decomposition-based optimization methods requires determination of appropriate problem partitioning and coordination strategies. Previous optimal partitioning techniques have not addressed the coordination issue explicitly. This article presents a formal approach to simultaneous partitioning and coordination strategy decisions that can provide insights on whether a decomposition-based method will be effective for a given problem. Pareto-optimal solutions are generated to quantify tradeoffs between the sizes of subproblems and coordination problems, as measures of the computational costs resulting from different partitioning-coordination strategies. Promising preliminary results with small test problems are presented. The approach is illustrated on an electric water pump design problem.
#16 876
Diagonal Quadratic Approximation for Parallelization of Analytical Target Cascading
score: 3.26866515833
Analytical Target Cascading (ATC) is an effective decomposition approach used for engineering design optimization problems that have hierarchical structures. With ATC, the overall system is split into subsystems, which are solved separately and coordinated via target/response consistency constraints. As parallel computing becomes more common, it is desirable to have separable subproblems in ATC so that each subproblem can be solved concurrently to increase computational throughput. In this paper, we first examine existing ATC methods, providing an alternative to existing nested coordination schemes by using the block coordinate descent method (BCD). Then we apply diagonal quadratic approximation (DQA) by linearizing the cross term of the augmented Lagrangian function to create separable subproblems. Local and global convergence proofs are described for this method. To further reduce overall computational cost, we introduce the truncated DQA (TDQA) method, which limits the number of inner-loop iterations of DQA. These two new methods are empirically compared to existing methods using test problems from the literature. Results show that the computational cost of nested-loop methods is reduced by using BCD, and that the truncated methods, TDQA and ALAD, are generally superior to other nested-loop methods, achieving lower overall computational cost than the best previously reported results.
#17 897
An Extension of the Commonality Index for Product Family Optimization
score: 3.8250354349
One critical aim of product family design is to offer distinct variants that attract a variety of market segments while maximizing the number of common parts to reduce manufacturing cost. Several indices have been developed for measuring the degree of commonality in existing product lines to compare product families or assess improvement of a redesign. In the product family optimization literature, commonality metrics are used to define the multi-objective tradeoff between commonality and individual variant performance. These
#18 946
Improved Head Restraint Design for Safety and Compliance
score: 5.99999147314
The National Highway Traffic Safety Administration (NHTSA) recently revised Federal Motor Vehicle Safety Standard (FMVSS) 202, which governs head restraints. The new standard, known as FMVSS 202a, establishes for the first time in the U.S. a requirement for the fore-aft position of the head restraint. The fore-aft distance between the head restraint and headform representing a midsize male occupant must not exceed 55 mm when measured with the seat back angle set to 25 degrees. The goal of the rule change is to reduce the incidence of whiplash-associated disorders caused by rear impacts. Moving the head restraint closer to the head prior to impact decreases the amount of relative motion between the occupants’ heads and torsos and is believed to decrease the risk of soft-tissue neck injury. As manufacturers phase in seats that meet the new criterion, some vehicle models have drawn complaints from drivers that the head restraint causes discomfort by interfering with their preferred head position, forcing them to select a more reclined seat back angle than they would prefer. To address this issue, an analysis of driver head locations relative to the seat was conducted using a new optimization-based framework for vehicle interior optimization. The approach uses simulations with thousands of virtual occupants to quantify distributions of postural variables of interest. In this case, the analysis showed that smaller-stature occupants are disproportionately likely to experience head-position interference from a head restraint that is rigidly affixed to the seat back. Using an analysis approach that considers both postural and anthropometric variability, design guidelines for the kinematics of an articulated head restraint are proposed. Such a restraint would provide optimal head restraint positioning across occupant sizes while minimizing interference.
#19 965
Data Mining and Fuzzy Clustering to Support Product Family Design
score: 3.86793497744
In mass customization, data mining can be used to extract valid, previously unknown, and easily interpretable information from large product databases in order to improve and optimize engineering design and manufacturing process decisions. A product family is a group of related products based on a product platform, facilitating mass customization by providing a variety of products for different market segments cost-effectively. In this paper, we propose a method for identifying a platform along with variant and unique modules in a product family using data mining techniques. Association rule mining is applied to develop rules related to design knowledge based on product function, which can be clustered by their similarity based on functional features. Fuzzy c-means clustering is used to determine initial clusters that represent modules. The clustering result identifies the platform and its modules by a platform level membership function and classification. We apply the proposed method to determine a new platform using a case study involving a power tool family.
#20 973
A Kriging Metamodel Assisted Multi-Objective Genetic Algorithm for Design Optimization
score: 2.88115819264
The high computational cost of population based optimization methods, such as multi-objective genetic algorithms, has been preventing applications of these methods to realistic engineering design problems. The main challenge is to devise methods that can significantly reduce the number of computationally intensive simulation (objective/constraint function) calls. We present a new multi-objective design optimization approach in which kriging-based metamodeling is embedded within a multi-objective genetic algorithm. The approach is called the Kriging assisted Multi-Objective Genetic Algorithm, or K-MOGA. The key difference between K-MOGA and a conventional MOGA is that in K-MOGA some of the design points or individuals are evaluated by kriging metamodels, which are computationally inexpensive, instead of by the simulation. The decision as to whether the simulation or the kriging metamodels are used to evaluate an individual is based on checking a simple condition: whether using the kriging metamodels for that individual changes the non-dominated set in the current generation. If this set is changed, then the simulation is used for evaluating the individual; otherwise, the corresponding kriging metamodels are used. Seven numerical and engineering examples with different degrees of difficulty are used to illustrate the applicability of the proposed K-MOGA. The results show that, on average, K-MOGA converges to the Pareto frontier with about 50% fewer simulation calls than a conventional MOGA.
#21 997
Design and Verification of a New Computer Controlled Seating Buck
score: 3.25474208059
Appraising vehicle package design concepts using seating bucks (physical prototypes representing the vehicle package) is an integral part of the vehicle package design process. Building such bucks is costly and may impose a substantial burden on the vehicle design cycle time. Further, static seating bucks lack the flexibility to accommodate design iterations during the gradual progression of a vehicle program. A “Computer controlled seating buck”, as described in this paper, is a quick and inexpensive alternative to traditional seating bucks with the desired degree of fidelity. It is particularly useful for package and ergonomic studies in the early stages of a vehicle program, long before the data is available to build a traditional seating buck. Such a seating buck has been developed to accommodate Ford vehicle package design needs. This paper presents the functional requirements, the high-level conceptual design of how these requirements are realized, and the methods to verify, improve and sustain the dimensional accuracy and capability of the new computer controlled seating buck.
#22 1009
Flexible Product Platforms: Framework and Case Study
score: 7.33088869147
Customization and market uncertainty require increased functional and physical bandwidth in product platforms. This paper presents a platform design process in response to such future uncertainty. The process consists of seven iterative steps and is applied to an automotive body-in-white (BIW) where 10 out of 21 components are identified as potential candidates for embedding flexibility. The method shows how to systematically pinpoint and value flexible elements in platforms. This allows increased product family profit despite uncertain variant demand and specification changes. We show how embedding flexibility suppresses change propagation and lowers switch costs, despite an increase of 34% in initial investment for equipment and tooling. Monte Carlo simulation results for 12 future scenarios reveal the value of embedding flexibility.
#23 1030
Engineering Product Design Optimization for Retail Channel Acceptance
score: 12.3041712747
Significant recent research has focused on the marriage of consumer preferences and engineering design in order to improve profitability. The extant literature has neglected the effects of channel markets which are increasingly prevalent. At the crux of the issue is the fact that channel dominating retailers, like Wal-Mart, have the ability to unilaterally control manufacturer production decisions as gatekeepers to the consumer or market. In this paper, we propose a new methodology that accounts for this power asymmetry. A chance constrained framework is used to model retailer acceptance of possible engineering designs and accounts for the important effect on the profitability of the retailer’s assortment through a latent class estimation of demand from conjoint surveys. Our approach allows the manufacturer to optimize a product design for profitability while reliably ensuring that the product will make it to market by making the retailer more profitable with the addition of the new product. As a demonstrative example, we apply the proposed approach for product design selection in the case of an angle grinder. For this example, we analyze the market and are able to improve expected manufacturer profitability while simultaneously presenting the decision maker with tradeoffs between slotting allowances, market share, and risk of retailer acceptance.
#24 1050
Visual Representations as an Aid to Concept Generation
score: 3.01838740443
This paper describes our initial efforts to develop a 3D visualization tool that is part of an overall effort to create a Concept Generator, an automated conceptual design tool, to aid a designer during the early stages of the design process. The use of CAD software has diversified into various disciplines that make use of simulation and software modeling tools for purposes that range from improving design accuracy to reducing lead times and providing simple visualizations. The impacts of CAD software have been beneficial in industry and education. Described in this paper is the use of low-memory VRML models to represent components. These low-memory models have been created to achieve several goals that complement the overall objectives of the concept generator. One key goal is that the concept generator be accessible via the web, hence the need for low-memory and low-data models. Additionally, as the concept generator is intended for use during early conceptual design, the 3D visualization tool allows the creation of models upon which basic manipulations can be performed so that designers can get an initial feel for the structure their product is going to take. Our research has enabled us to create a basic visualization tool which, while similar in nature to most other CAD software tools, is unique in that it represents the link, as a visual interface, between a formulated concept and the designer. The paper presents the research problem, an overview of the architecture of the software tool and some preliminary results on visual representations as an aid to concept generation.
#25 1069
Manufacturing Investment and Allocation in Product Line Design Decision-Making
score: 4.50048726219
An important aspect of product development is design for manufacturability (DFM) analysis that aims to incorporate manufacturing requirements into early product decision-making. Existing methods in DFM seldom quantify explicitly the tradeoffs between revenues and costs generated by making design choices that may be desirable in the market but costly to manufacture. This paper builds upon previous work coordinating models for engineering design and marketing product line decision-making by incorporating quantitative models of manufacturing investment and production allocation. The result is a methodology that considers engineering design decisions quantitatively in the context of manufacturing and market consequences in order to resolve tradeoffs, not only among performance objectives, but also between market preferences and manufacturing cost.
#26 1080
Methods for Discrete Design Optimization
score: 2.65555045295
One area in design optimization is component-based design, where the designer has to choose between many different discrete alternatives. These problems have a discrete character, and in order to admit optimization an interpolation between the alternatives is often performed. In this paper, however, a modified version of the Complex method, a non-gradient algorithm, is developed in which no interpolation between alternatives is needed. Furthermore, the optimization algorithm itself is optimized using a performance metric that measures the effectiveness of the algorithm. In this way the optimal performance of the proposed discrete Complex method has been identified. Another important area in design optimization is optimization based on simulations. For such problems no gradient information is available, hence non-gradient methods are a natural choice. The application for this paper is the design of an industrial robot where the system performance is evaluated using comprehensive simulation models. The objective is to maximize performance with constraints on lifetime and cost, and the design variables are discrete choices of gear boxes for the different axes.
#27 1084
Heuristic Gradient Projection for 3D Space Frame Optimization
score: 2.70926319844
The purpose of this work is to develop a novel optimization process for the design of space frames. The main objective is to minimize the space frame volume and consider stress constraints satisfaction. A finite element program is devised to synthesize 3D-space frames and aid in its topology optimization. The program is verified through different elementary problems with known analytical solutions as well as with commercial packages. A Midi-Bus frame is modeled with about 300 members and analyzed for a severe road model condition. The optimization effectively uses the devised Heuristic Gradient Projection (HGP) technique to synthesize the optimum Midi-Bus frame. Results indicate a marked improvement over available designs and remarkably faster convergence over other optimization techniques. This technique can thus be effectively applied to other large 3D space frame synthesis and optimization.
#28 1093
A Parallel Grammar for Simulation-Driven Mechanical Design Synthesis
score: 3.99999645232
This research investigates the use of quantitative measures of performance to aid the grammatical synthesis of mechanical systems. Such performance measures enable search algorithms to be used to find designs that meet requirements and optimize performance by using automatically generated performance feedback, including behavioral simulation, as a guide. The work builds on a new type of production system, a parallel grammar for mechanical systems based on a Function-Behavior-Structure representation, to generate an extensive variety of designs. Geometric and topological constraints are used to bound the design space, termed the language of the grammar, to ensure the validity of designs generated. The winding mechanism of an electromechanical camera is examined as a case study using the behavioral modeling language Modelica. Behavioral simulations are run for parametric models generated by the parallel grammar and this data is used, in addition to geometric performance metrics, for performance evaluation of generated alternative designs. Multi-objective stochastic search, in the form of a hybrid pattern search developed as part of this research, is used to generate Pareto sets of optimally directed designs of winding mechanisms, showing the design of the camera chosen for the case study to be optimally directed with respect to the design objectives considered. The Pareto sets generated illustrate the range of simulation-driven solutions that can be generated and simulated automatically as well as their performance tradeoffs.
#29 1197
A Comparison of Commonality Indices for Product Family Design
score: 5.51688187299
Today’s highly competitive and global marketplace is redefining the way companies do business: many companies are being faced with the challenge of providing as much variety as possible for the market with as little variety as possible between products. In order to achieve this, product families have been developed, allowing the realization of a sufficient variety of products to meet the customers’ demands while keeping costs relatively low. The challenge when designing a family of products is in resolving the tradeoff between product commonality and distinctiveness: if commonality is too high, products lack distinctiveness, and their individual performance is not optimized; on the other hand, if commonality is too low, manufacturing costs will increase dramatically. Toward this end, several commonality indices have been proposed to assess the amount of commonality within a product family. In this paper, we compare and contrast six of the commonality indices from the literature based on their ease of data collection, repeatability and consistency. Eight families of products are dissected and analyzed, and the commonality of each product family is computed using each commonality index. The results are then analyzed and compared, and recommendations are given on their usefulness for product family design. This study lays a foundation for understanding the relationship between different platform leveraging strategies and the resulting degree of commonality within a product family.
#30 1206
Development of a Production Cost Estimation Framework for Product Family Design
score: 2.80272506899
The main task of a product family designer is to decide the right components/design variables to share among products to maintain economies of scale with minimum sacrifice in the performance of each product in the family. The decisions are usually based on several criteria, but production cost is of primary concern. Estimating the production cost of a family of products involves estimating the production cost of each product in the family including the cost effects of common and variant components/design variables in the family. In this paper, we introduce a production cost estimation framework for product family design based on Activity-Based Costing (ABC), which is composed of three stages: (1) allocation, (2) estimation, and (3) analysis. In the allocation stage, the production activities that are necessary to produce all of the products in the family are identified and modeled with an activity table, a resource table, and an activity flow. To allocate the activities to products, a product family structure is represented by a hierarchical classification of the items that form the product family. In the estimation stage, production costs are estimated by converting the production activities to costs using key cost drivers that consume main resources. In the analysis stage, components/design variables for product family design are investigated with resource sharing methods through activity analysis. As an example, the proposed framework is applied to estimate the production cost of a family of cordless power screwdrivers.
#31 1213
A Parametric Approach to Vehicle Seating Buck Design
score: 3.4849282226
Vehicle package development is an important part of the entire vehicle design. It consists of determining the occupant’s spatial environment, the vehicle’s mechanical spatial configuration and the overall exterior/interior dimensions while meeting the engineering requirements, including packaging, structure, manufacturing, etc. Developing and verifying the occupant compartment configuration is usually conducted by using a seating buck. To build a seating buck, vehicle interior surfaces are generated in CAD using vehicle exterior surfaces, package layouts and master sections. During early program stages, this information is scattered, incomplete and constantly changing, which makes the seating buck creation challenging and the package design decision-making more difficult. A new method has been developed to quickly generate the seating buck surfaces from scattered information. It has been shown to significantly reduce the time conventionally required for seating buck surface modeling. This paper documents the method and process and summarizes the potential of the method and its impact on vehicle package design.
#32 1222
A Single-Loop Method for Reliability-Based Design Optimization
score: 19.9365595651
Reliability-Based Design Optimization (RBDO) can provide optimum designs in the presence of uncertainty and can therefore be a powerful tool for design under uncertainty. The traditional, double-loop RBDO algorithm requires nested optimization loops, where the design optimization (outer) loop repeatedly calls a series of reliability (inner) loops. Due to the nested optimization loops, the computational effort can be prohibitive for practical problems. A single-loop RBDO algorithm is proposed in this paper for both normal and non-normal random variables. Its accuracy is the same as that of the double-loop approach and its efficiency is almost equivalent to deterministic optimization. It collapses the nested optimization loops into an equivalent single-loop optimization process by imposing the Karush-Kuhn-Tucker optimality conditions of the reliability loops as equivalent deterministic equality constraints of the design optimization loop. It therefore converts the probabilistic optimization problem into an equivalent deterministic optimization problem, eliminating the need for calculating the Most Probable Point (MPP) in repeated reliability assessments. Several numerical applications, including an automotive vehicle side impact example, demonstrate the accuracy and superior efficiency of the proposed single-loop RBDO algorithm.
#33 1225
A Saddlepoint Approximation Method for Uncertainty Analysis
score: 7.54852438837
The availability of computationally efficient and accurate methods for probabilistic computation is crucial to the success of applications of probabilistic design using complex engineering simulation models. To address this need, a Saddlepoint Approximation method for probabilistic engineering analysis is introduced. A general performance function is approximated at the Most Likelihood Point with either linear or quadratic forms and the Saddlepoint Approximation is then applied to evaluate the probability associated with the performance. The proposed approach provides highly accurate probabilistic results while maintaining minimum computational requirement. Two examples are presented to demonstrate the effectiveness of the proposed method.
#34 1240
Convergence and Stability in Distributed Design of Large Systems
score: 4.8562707029
Decentralized systems constitute a special class of design under distributed environments. They are characterized as large and complex systems divided into several smaller entities that have autonomy in local optimization and decision-making. The mechanisms behind this network of decentralized design decisions create difficult management and coordination issues. Standard techniques for modeling and solving decentralized design problems typically fail to capture the underlying dynamics of the decentralized processes and therefore result in suboptimal solutions. This paper aims to model and understand the mechanisms and dynamics behind a decentralized set of decisions within a complex design process. It builds on existing convergence results for decentralized design of simple problems and extends them to any kind of quadratic decentralized system. This involves two major steps: developing the convergence conditions for the distributed optimization problem, and finding the equilibrium points of the design space. Illustrations of the results are given in the form of hypothetical decentralized examples.
#35 1245
Hierarchical Arrangement of Characteristics in Product Design Optimization Problems for Deeper Insight Into Optimized Results
score: 3.9658422833
This paper proposes a design optimization method for machine products that is based on the decomposition of performance characteristics, or alternatively, the extraction of simpler characteristics, to accommodate the specific features or difficulties of a particular design problem. The optimization problem is expressed using hierarchical constructions of the decomposed and extracted characteristics, and the optimizations are sequentially repeated, starting with groups of mutually conflicting characteristics at the lowest hierarchical level and proceeding to higher levels. The proposed method not only effectively enables achieving optimum design solutions, but also facilitates deeper insight into the design optimization results and helps in obtaining ideas for breakthroughs in the optimum solutions. An applied example is given to demonstrate the effectiveness of the proposed method.
#36 1277
Analytical Variance-Based Global Sensitivity Analysis in Simulation-Based Design Under Uncertainty
score: 4.51498473225
The importance of sensitivity analysis in engineering design cannot be over-emphasized. In design under uncertainty, sensitivity analysis is performed with respect to the probabilistic characteristics. Global sensitivity analysis (GSA), in particular, is used to study the impact of variations in input variables on the variation of a model output. One of the most challenging issues for GSA is the intensive computational demand for assessing the impact of probabilistic variations. Existing variance-based GSA methods are developed for general functional relationships but require a large number of samples. In this work, we develop an efficient and accurate approach to GSA that employs analytic formulations derived from metamodels of engineering simulation models. We examine the types of GSA needed for design under uncertainty and derive generalized analytical formulations of GSA based on a variety of metamodels commonly used in engineering applications. The benefits of our proposed techniques are demonstrated and verified through both illustrative mathematical examples and the robust design for improving vehicle handling performance.
#37 1278
Building Surrogate Models Based on Detailed and Approximate Simulations
score: 2.9587944326
Preliminary design of a complex system often involves exploring a large design space. This may require repeated use of computationally expensive simulations. To ease the computational burden, surrogate models are built to provide rapid approximations of more expensive models. However, the surrogate models themselves are often expensive to build because they are based on repeated experiments with computationally expensive simulations. An alternative approach is to replace the detailed simulations with simplified approximate simulations, thereby sacrificing accuracy for reduced computational time. Naturally, surrogate models built from these approximate simulations will also be imprecise. A strategy is needed for improving the precision of surrogate models based on approximate simulations without significantly increasing computational time. In this paper, a new approach is taken to integrate data from approximate and detailed simulations to build a surrogate model to describe the relationship between output and input parameters. Experimental results from approximate simulations form the bulk of the data, and they are used to build a model based on a Gaussian process. The fitted model is then ‘adjusted’ by incorporating small amounts of data from detailed simulations to obtain a more accurate prediction model. The effectiveness of this approach is demonstrated with a design application for a cellular material that is used to cool a microprocessor. The emphasis is on the method and not on the results.
#38 1284
A Sequential Exploratory Experimental Design Method: Development of Appropriate Empirical Models in Design
score: 2.76932936313
Much of today’s engineering analysis work consists of running complex computer codes (simulation programs), in which a vector of responses is obtained when values of design variables are supplied. To save time and effort in simulation, sampling (design of experiments) techniques are applied to help develop metamodels (empirical models or surrogate models) that can be used to replace the expensive simulations in future design stages. The usage of metamodels also helps designers to integrate inter-disciplinary codes and grasp the relationship between inputs and outputs. In this paper, we focus on a very important topic in studies of sampling and metamodeling techniques, i.e., the sequential design of experiments and metamodeling; the research question is: How to design sequential computer experiments to obtain accurate metamodels? After a discussion of design and metamodeling strategies, a Sequential Exploratory Experimental Design (SEED) method is developed to help identify data points at different stages in metamodeling. Given limited resources, it is expected that more accurate metamodels can be developed with SEED. A single-variable example is used to help illustrate the SEED method.
#39 1299
Reliability-Based Design With the Mixture of Random and Interval Variables
score: 8.34336560567
In Reliability-Based Design (RBD), uncertainty usually implies randomness: nondeterministic variables are assumed to follow certain probability distributions. However, in real engineering applications, some of the distributions may not be precisely known, or the uncertainties associated with some variables do not arise from randomness; such nondeterministic variables are only known within intervals. In this paper, a method of RBD with a mixture of random variables with distributions and uncertain variables with intervals is proposed. The reliability is considered under the condition of the worst combination of interval variables. In comparison with traditional RBD, the computational demand of RBD with the mixture of random and interval variables increases dramatically. To alleviate the computational burden, a sequential single-loop procedure is developed to replace the computationally expensive double-loop procedure that arises when the worst-case scenario is applied directly. With the proposed method, the RBD is conducted within a series of cycles of deterministic optimization and reliability analysis. The optimization model in each cycle is built based on the Most Probable Point (MPP) and the worst-case combination obtained in the reliability analysis of the previous cycle. Since the optimization is decoupled from the reliability analysis, the computational effort of the MPP search is reduced to a minimum. The proposed method is demonstrated with a structural design example.
#40 1301
Structural Durability Design Optimization and Its Reliability Assessment
score: 2.96611932788
Mechanical fatigue, driven by the external and inertia transient loads encountered in the service life of mechanical systems, often leads to structural failure due to accumulated damage. Structural durability analysis, which predicts the fatigue life of mechanical components subject to dynamic stresses and strains, is a computationally intensive multidisciplinary simulation process, since it requires the integration of several computer-aided engineering tools and a large amount of data communication and computation. Uncertainties in geometric dimensions due to manufacturing tolerances make the fatigue life of a mechanical component nondeterministic. Because uncertainty propagation to structural fatigue under transient dynamic loading is not only numerically complicated but also extremely expensive, it is a challenging task to develop a structural durability-based design optimization process and the reliability analysis needed to ascertain whether the optimal design is reliable. The objective of this paper is the development of an integrated CAD-based computer-aided engineering process to effectively carry out design optimization for structural durability, yielding a durable and cost-effectively manufacturable product. In addition, a reliability analysis is executed to assess the reliability of the deterministic optimal design.
#41 1305
Non-Gradient Based Parameter Sensitivity Estimation for Robust Design Optimization
score: 2.92233736114
We present a method for estimating the parameter sensitivity of a design alternative for use in robust design optimization. The method is non-gradient based: it is applicable even when the objective function of an optimization problem is non-differentiable and/or discontinuous with respect to the parameters. Also, the method does not require a presumed probability distribution for parameters, and is still valid when parameter variations are large. The sensitivity estimate is developed based on the concept that associated with each design alternative there is a region in the parameter variation space whose properties can be used to predict that design’s sensitivity. Our method estimates such a region using a worst-case scenario analysis and uses that estimate in a bi-level robust optimization approach. We present a numerical and an engineering example to demonstrate the applications of our method.
#42 1309
Production Cost Modeling to Support Product Family Design Optimization
score: 5.11151774619
Product family design involves carefully balancing the commonality of the product platform with the distinctiveness of the individual products in the family. While a variety of optimization methods have been developed to help designers determine the best design variable settings for the product platform and individual products within the family, production costs are thought to be an important criterion for choosing the best platform among candidate platform designs. Thus, an appropriate production cost model is a prerequisite for estimating the production costs incurred by having common and variant components within a product family. In this paper, we propose a production cost model based on a production cost framework associated with the manufacturing activities. The production cost model can be easily integrated within optimization frameworks to support a Decision-Based Design approach for product family design. As an example, the production cost model is utilized to estimate the production costs of a family of cordless power screwdrivers.
#43 1347
Analysis of Support Vector Regression for Approximation of Complex Engineering Analyses
score: 4.35314222351
A variety of metamodeling techniques have been developed in the past decade to reduce the computational expense of computer-based analysis and simulation codes. Metamodeling is the process of building a “model of a model” that provides a fast surrogate for a computationally expensive computer code. Common metamodeling techniques include response surface methodology, kriging, radial basis functions, and multivariate adaptive regression splines. In this paper, we present Support Vector Regression (SVR) as an alternative technique for approximating complex engineering analyses. The computationally efficient theory behind SVR is presented, and SVR approximations are compared against the aforementioned four metamodeling techniques using a testbed of 22 engineering analysis functions. SVR achieves more accurate and more robust function approximations than these four metamodeling techniques and shows great promise for future metamodeling applications.
#44 1348
An Efficient Algorithm for Constructing Optimal Design of Computer Experiments
score: 5.74007531389
The metamodeling approach has been widely used due to the high computational cost of using high-fidelity simulations in engineering design. The accuracy of metamodels is directly related to the experimental designs used. Optimal experimental designs have been shown to have good “space filling” and projective properties. However, the high cost of constructing them limits their use. In this paper, a new algorithm for constructing optimal experimental designs is developed. There are two major developments involved in this work. One is an efficient global optimal search algorithm, named the enhanced stochastic evolutionary (ESE) algorithm. The other is a set of efficient algorithms for evaluating optimality criteria. The proposed algorithm is compared to two existing algorithms and is found to be much more efficient in terms of computation time, the number of exchanges needed for generating new designs, and the achieved optimality criteria. The algorithm is also flexible enough to construct various classes of optimal designs that retain certain structural properties.
#45 1370
A Study of Convergence in Decentralized Design
score: 3.5305097996
The decomposition and coordination of decisions in the design of complex engineering systems is a great challenge. Companies that design these systems routinely allocate design responsibility for the various subsystems and components to different people, teams, or even suppliers. The mechanisms behind this network of decentralized design decisions create difficult management and coordination issues. However, developing efficient design processes is paramount, especially under market pressures and customer expectations. Standard techniques for modeling and solving decentralized design problems typically fail to capture the underlying dynamics of the decentralized processes and therefore result in suboptimal solutions. This paper aims to model and understand the mechanisms and dynamics behind a decentralized set of decisions within a complex design process. Using concepts from mathematics and economics, including Game Theory and the Cobweb Model, we model a simple decentralized design problem and provide efficient solutions. This new approach uses numerical series and linear algebra as tools to determine conditions for convergence of such decentralized design problems. The goal of this paper is to establish the first steps toward understanding the mechanisms of decentralized decision processes, through two major steps: studying the convergence characteristics, and finding the final equilibrium solution of a decentralized problem. Illustrations of the developments are provided in the form of two decentralized design problems with different underlying behavior.
#46 1373
Design Space Visualization and Its Application to a Design by Shopping Paradigm
score: 6.36088076139
We have developed a data visualization interface that facilitates a design by shopping paradigm, allowing a decision-maker to form a preference by viewing a rich set of good designs and to use this preference to choose an optimal design. Design automation makes this paradigm practical, since a large number of designs can be synthesized in a short period of time. The interface allows users to visualize complex design spaces using multi-dimensional visualization techniques that include customizable glyph plots, parallel coordinates, linked views, brushing, and histograms. As is common with data mining tools, the user can specify upper and lower bounds on the design space variables, assign variables to glyph axes and parallel coordinate plots, and dynamically brush variables. Additionally, preference shading for visualizing a user’s preference structure and algorithms for visualizing the Pareto frontier have been incorporated into the interface to help shape a decision-maker’s preference. Use of the interface is demonstrated on a satellite design example by highlighting different preference structures and the resulting Pareto frontiers. The capabilities of the design by shopping interface were driven by real industrial customer needs, and the interface was demonstrated at a spacecraft design session conducted by a team of Mars spacecraft design experts at Lockheed Martin.
#47 1383
Optimal Troubleshooting for Electro-Mechanical Systems
score: 2.90912080744
When a complex electromechanical system fails, the troubleshooting procedure adopted is often complex and tedious. No standard methods currently exist to optimize the sequence of steps in a troubleshooting process. The ad hoc methods generally followed are less than optimal and can result in high maintenance costs. This paper describes the use of behavioral models and multistage decision-making models in Bayesian networks for representing the troubleshooting process. It discusses the advantages of using these methods and the difficulties in implementing them.
#48 1467
Automated Design Synthesis for Micro-Electro-Mechanical Systems (MEMS)
score: 3.21543294328
This paper proposes a general architecture for using evolutionary algorithms to achieve MEMS design synthesis. Functional MEMS devices are designed by combining parameterized basic MEMS building blocks using Multi-objective Genetic Algorithms (MOGAs) to produce a Pareto-optimal set of feasible designs. The iterative design synthesis loop is implemented by combining MOGAs with the SUGAR MEMS simulation tool. Given a high-level description of the device’s desired behavior, both the topology and the sizing are generated. The topology, or physical configuration, includes the number and types of basic building blocks and their connectivity. The sizing entails assigning numerical values to the parameterized building blocks. A sample from the Pareto-optimal set of designs is presented for a meandering resonator example, along with convergence plots.
#49 1485
Decomposition-Based Assembly Synthesis Based on Structural Stiffness Considerations
score: 4.21289912925
This paper presents a method that systematically decomposes product geometry into a set of components while considering the structural stiffness of the end product. A structure is represented as a graph of its topology, and the optimal decomposition is obtained by combining FEM analyses with a Genetic Algorithm. As a case study, the side frame of a passenger car is decomposed for minimum distortion of the front door panel geometry, with spot-welded joints modeled as torsional springs. First, the rates of the torsional springs are treated as constant values taken from the literature. Second, they are treated as design variables within realistic bounds. By allowing the joint rates to change, it is demonstrated that the optimal decomposition can achieve smaller distortion with less total joint stiffness (hence fewer weld spots) than the optimal decomposition with the typical joint rates available in the literature.
#50 1494
On Sequential Sampling for Global Metamodeling in Engineering Design
score: 11.4151870719
Approximation models (also known as metamodels) have been widely used in engineering design to facilitate analysis and optimization of complex systems that involve computationally expensive simulation programs. The accuracy of metamodels is directly related to the sampling strategies used. Our goal in this paper is to investigate the general applicability of sequential sampling for creating global metamodels. Various sequential sampling approaches are reviewed and new approaches are proposed. The performance of these approaches is compared against that of the one-stage approach using a set of test problems with a variety of features. The potential uses of sequential sampling strategies are also discussed.
#51 1497
Design of Hierarchic Platforms for Customizable Products
score: 3.28299174926
The objective in product platform design is to synthesize a set of components that will be shared by a number of product variants considering potential sacrifices in individual product performance that result from parts sharing.
#52 1498
A Quantitative Approach for Designing Multiple Product Platforms for an Evolving Portfolio of Products
score: 9.92699891558
Product variety can be provided more efficiently and effectively by creating families of products based on product platforms. One of the major advantages of the development of product platforms is the facilitation of an overall product development strategy, and an important factor in product development is the evolution of a family of products, including addition and retirement of products as well as changing demand and associated production quantities. In this paper, we present a quantitative approach for designing
#53 1500
Knowledge Intensive Support for Product Family Design
score: 2.68007601252
This paper presents an ongoing research effort on platform-based product family design using a knowledge-intensive support paradigm. The background research related to product family design is first reviewed, and the fundamental issues underlying product family design are discussed. A module-based integrated product family design scheme is proposed, with knowledge support for customer requirements modeling, product architecture modeling, product platform establishment, product family generation, and product assessment. The systematic methodology and the relevant technologies are investigated and developed for knowledge modeling and support in the product family design process. An information and knowledge-modeling framework is developed for the module-based product family design scheme. The issues and requirements involved in developing a knowledge-intensive support system for module-based product family design are also addressed. Finally, a case study on knowledge support for power supply family design and evaluation is provided for illustration.
#54 1501
Platform Selection Under Performance Loss Constraints in Optimal Design of Product Families
score: 2.64610332467
Designing a family of product variants that share some components usually entails a performance loss relative to the individually optimized variants due to the commonality constraints. Choosing components for sharing may depend on what performance losses can be tolerated. This article presents a methodology for making commonality decisions while controlling individual performance losses. Previous work focused on evaluating individual performance losses due to pre-specified sharing. Trade-offs were identified for different platforms (i.e., the sets of components shared among products) by means of Pareto sets. In the present work an optimal design problem is formulated to choose product components to be shared without exceeding a user-specified performance loss tolerance. This enables the designer to control trade-offs and obtain optimal product family designs for different levels of performance losses in an attempt to maximize commonality. A family of automotive side frames is used to demonstrate the approach.
#55 1508
A Genetic Algorithm Based Method for Product Family Design Optimization
score: 5.55334091858
Increased commonality in a family of products can simplify manufacturing and reduce the associated costs and lead-times. There is a tradeoff, however, between commonality and individual product performance within a product family, and in this paper we introduce a genetic algorithm based method to help find an acceptable balance between commonality in the product family and desired performance of the individual products in the family. The method uses Design of Experiments to help screen unimportant factors and identify factors of interest to the product family and a multiobjective genetic algorithm, the non-dominated sorting genetic algorithm, to optimize the performance of the products in the resulting family. To demonstrate implementation of the proposed method, the design of a family of three General Aviation Aircraft is presented along with a product variety tradeoff study to determine the extent of the tradeoff between commonality and individual product performance within the aircraft family. The efficiency and effectiveness of the proposed method is illustrated by comparing the family of aircraft against individually optimized designs and designs obtained from an alternate gradient-based multiobjective optimization method.
#56 1528
Sequential Optimization and Reliability Assessment Method for Efficient Probabilistic Design
score: 7.06415078438
Probabilistic design optimization offers tools for making reliable decisions in the presence of the uncertainty associated with design variables/parameters and simulation models. In a probabilistic design, such as reliability-based design and robust design, design feasibility is formulated probabilistically so that the probability of constraint satisfaction (reliability) exceeds the desired limit. The reliability assessment for probabilistic constraints often involves an iterative procedure, so two loops are involved in a probabilistic optimization, and this double-loop procedure makes the computational demand extremely high. To improve the efficiency of probabilistic design, a novel method, sequential optimization and reliability assessment (SORA), is developed in this paper. The SORA method employs a single-loop strategy with a series of cycles of optimization and reliability assessment. In each cycle, optimization and reliability assessment are decoupled from each other: no reliability assessment is required within the optimization, and the reliability assessment is conducted only after the optimization. The key concept of the proposed method is to shift the boundaries of violated deterministic constraints (with low reliability) in the feasible direction based on the reliability information obtained in the previous cycle. Hence the design is quickly improved from cycle to cycle and the computational efficiency improves significantly. Two engineering applications, the reliability-based design for vehicle crashworthiness in side impact and the integrated reliability and robust design of a speed reducer, are presented to demonstrate the effectiveness of the SORA method.
#57 1529
An Investigation of Nonlinearity of Reliability-Based Design Optimization Approaches
score: 3.37130874293
Deterministic optimum designs that are obtained without consideration of uncertainty could lead to unreliable designs, which calls for a reliability approach to design optimization using a Reliability-Based Design Optimization (RBDO) method. A typical RBDO process iteratively carries out a design optimization in an original random space.
#58 1531
Visualization of Multidimensional Design and Optimization Data Using Cloud Visualization
score: 3.8090466189
As our ability to generate more and more data for increasingly large engineering models improves, the need for methods to manage that data grows. From a decision-making perspective, information management involves capturing and representing significant information so that a designer can make effective and efficient decisions. However, most visualization techniques used in engineering, such as graphs and charts, are limited to two-dimensional and, at most, three-dimensional representations. In this paper, we present a new visualization technique to capture and represent engineering information in a multidimensional context. The new technique, Cloud Visualization, is based on representing sets of points as clouds in both the design and performance spaces. The technique is applicable to both single- and multiobjective optimization problems, and the relevant issues for each type of problem are discussed. A multiobjective case study is presented to demonstrate the application and usefulness of the Cloud Visualization techniques.
In [5]:
# write the converged scores back onto each paper, then store the
# descending rank order (position 0 = highest-scoring paper)
for p in p_data:
    idx = int(p['index'])
    p['markov_rank'] = markov_rank[idx]
    p['bayes_rank'] = bayes_rank[idx]
markov_ranks = np.argsort(markov_rank)[::-1]
bayes_ranks = np.argsort(bayes_rank)[::-1]
super_data['markov_ranks'] = markov_ranks.tolist()
super_data['bayes_ranks'] = bayes_ranks.tolist()
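The stored rank lists make it easy to pull out the top papers under either scoring scheme. A minimal sketch, assuming (as the sanity check above does) that each paper's position in p_data equals its 'index' field; top_k is an illustrative name, not part of the stored format:

top_k = 5  # illustrative cutoff
for rank, idx in enumerate(super_data['bayes_ranks'][:top_k]):
    p = p_data[idx]
    print('#%d %s (bayes_rank %.3f)' % (rank + 1, p['title'], p['bayes_rank']))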
In [6]:
import networkx as nx
import community

# undirected citation graph, edges weighted by abstract similarity
G = nx.Graph()
for p in p_data:
    idx = int(p['index'])
    for count, c in enumerate(p['citations']):
        G.add_edge(idx, int(c), weight=p['citations_sim'][count])
# degree centrality: the fraction of all other nodes each paper touches
c_scores = nx.degree_centrality(G)
c_ranks = np.zeros(len(p_data))
for i in range(len(p_data)):
    if i in c_scores:
        c_ranks[i] = c_scores[i]
c_ranks = np.argsort(c_ranks)[::-1]
super_data['c_ranks'] = c_ranks.tolist()
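Note that nx.degree_centrality counts neighbors and ignores the similarity weights attached to the edges. If a weighted notion of centrality is wanted instead, a minimal sketch (w_deg and w_ranks are illustrative names, not stored in super_data):

w_deg = dict(G.degree(weight='weight'))  # sum of incident similarity weights per node
w_ranks = sorted(w_deg, key=w_deg.get, reverse=True)  # node ids, most central first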
In [7]:
from operator import itemgetter

# Louvain community detection on the citation graph
partition = community.best_partition(G)
label_num = len(set(partition.values()))
group = [[] for _ in range(label_num)]
for i in range(len(p_data)):
    if i in partition:
        p_data[i]['louvain_index'] = partition[i]
        group[partition[i]].append(i)
    else:
        # papers that never entered the graph get no community
        p_data[i]['louvain_index'] = -1
# build group info
final_group_info = []
for i in range(label_num):
    final_group_info.append({})
    final_group_info[i]['nodes'] = group[i]
    final_group_info[i]['size'] = len(group[i])
    final_group_info[i]['index'] = i
# top phrases for each group
for i in range(label_num):
    top_phrase = []
    count = np.zeros(len(index_phrase))
    for j in group[i]:
        for key in p_data[j]['phrases']:
            count[int(key)] += p_data[j]['phrases'][key]
    b = np.argsort(count)[::-1]
    for k in range(30):
        top_phrase.append(index_phrase[str(b[k])])
    final_group_info[i]['top_phrase'] = top_phrase
    name_str = top_phrase[0] + ', ' + top_phrase[1] + ' and ' + top_phrase[2]
    final_group_info[i]['name'] = name_str
# build connections between groups, excluding edges that stay
# inside the current group i
for i in range(label_num):
    connected_group = set()
    for node in final_group_info[i]['nodes']:
        for c in p_data[node]['all_cite']:
            out_index = p_data[int(c)]['louvain_index']
            if out_index != i:
                connected_group.add(out_index)
    final_group_info[i]['connected_group'] = connected_group
# get importer, exporter and contribution score
for group in final_group_info:
    index = group['index']
    # map each connected group to the member nodes involved in the exchange
    importer = {}
    exporter = {}
    # map each connected group to a citation count
    import_score = {}
    export_score = {}
    exchange_score = {}
    for cg in group['connected_group']:
        importer[cg] = set()
        exporter[cg] = set()
        import_score[cg] = 0
        export_score[cg] = 0
        exchange_score[cg] = 0
    for node in group['nodes']:
        # outgoing references: this group imports ideas from others
        for c in p_data[node]['citations']:
            out_index = p_data[int(c)]['louvain_index']
            if out_index != index:
                importer[out_index].add(node)
                import_score[out_index] += 1
                exchange_score[out_index] += 1
        # incoming citations: this group exports ideas to others
        for c in p_data[node]['cited_by']:
            out_index = p_data[int(c)]['louvain_index']
            if out_index != index:
                exporter[out_index].add(node)
                export_score[out_index] += 1
                exchange_score[out_index] += 1
    for cg in group['connected_group']:
        importer[cg] = list(importer[cg])
        exporter[cg] = list(exporter[cg])
    import_list = [[key, import_score[key]] for key in import_score]
    export_list = [[key, export_score[key]] for key in export_score]
    exchange_list = [[key, exchange_score[key]] for key in exchange_score]
    group['import_list'] = sorted(import_list, key=itemgetter(1), reverse=True)
    group['export_list'] = sorted(export_list, key=itemgetter(1), reverse=True)
    group['exchange_list'] = sorted(exchange_list, key=itemgetter(1), reverse=True)
    group['importer'] = importer
    group['exporter'] = exporter
# turn sets into lists for JSON storage
for group in final_group_info:
    group['nodes'] = list(group['nodes'])
    group['connected_group'] = list(group['connected_group'])
super_data['louvain_group'] = final_group_info
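Each group now carries ranked lists of its citation partners. A minimal sketch of how the stored structure can be read back (g, partner, and score are illustrative names):

for g in super_data['louvain_group']:
    if g['exchange_list']:
        partner, score = g['exchange_list'][0]  # strongest partner first
        print('%s <-> group %s: %d citation links' % (g['name'], str(partner), score))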
In [8]:
import os

path = "data/super_data_2.json"
if os.path.isfile(path):
    os.remove(path)
with open(path, "w") as f:
    json.dump(super_data, f)
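JSON cannot serialize Python sets or numpy arrays, which is why the .tolist() and list() conversions were applied before this dump. To verify the round trip, the file can be loaded back the same way super_data.json was loaded at the start; `reloaded` is an illustrative name:

with open("data/super_data_2.json", "r") as f:
    reloaded = json.load(f)
print(len(reloaded['louvain_group']))  # number of Louvain communities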
Content source: sudongqi/Propagation_Mergence