This is a program for generating an optimized rocket design based on a feasible initial design, some fixed parameters, physical constants, and a theoretical rocket model. Mass is minimized while satisfying a number of mission constraints.
A feasible design is successfully produced and analyzed, as well as a model of it that is compatible with OpenRocket simulation. This provides a centralized source of authoritative information on the design of Portland State Aerospace Society's 4th launch vehicle design, LV4.
This program and its dependencies were adapted by Cory from the work done by Ondrej, Erin, and Ian on the original MDO.
This depends on three other programs, located in the same directory. They should be consulted, as they contain many details which are omitted here.
The equations of motion for a rocket, a variable-mass system, are highly non-linear and there are no closed-form analytic solutions. Additionally, launch vehicles are complex systems with multiple interactions and mutual dependencies between their subsystems, each of which in and of itself constitute a difficult engineering project.
This presents a difficulty for clean-sheet design of a launch vehicle to fulfill specific mission requirements. A sufficiently detailed model and numerical integration techniques can circumvent the mathematical challenges of evaluating a given rocket design, at the cost of increased computational resources. A well-posed optimization problem, and an algorithm implementing it, can capture constraints and requirements, allowing a feasible and locally optimal solution to emerge naturally from the system dynamics.
There are other approaches to rocket design, such as trade space analysis or design of experiments. We have chosen our approach because it allows us to optimize the whole system all at once, hopefully cutting down on the time and effort that would be spent keeping the system consistent while developing only chunks of it at a time.
We may construct an objective function that represents the quality of our rocket as we see it. We selected Gross Liftoff Weight (GLOW) as our primary figure of merit, since it incorporates many aspects of a mission profile.
The Tsiolkovsky rocket equation $$\frac{\Delta v}{v_e} = \ln\left(\frac{m_0}{m_f}\right)$$ can be rearranged to isolate GLOW, $m_0$. The structure of the rocket and its subsystems determines its final dry mass, $m_f$; the effective exhaust velocity, $v_e$, captures engine performance; and the total change in velocity, $\Delta v$, is determined by mission objectives, and can potentially include estimates of gravitational and atmospheric losses. So, minimizing GLOW while demanding a certain apogee is equivalent to simultaneously minimizing structural mass, maximizing engine performance, and balancing the conflicting goals of minimizing losses due to gravity and aerodynamics. Note that the sources of this conflict are the incentive to expel propellant rapidly, to avoid the cost of carrying propellant in a gravitational field, and the incentive to reduce velocity in the lower atmosphere, since drag is proportional to air pressure and the square of velocity.
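As a concrete check of that rearrangement, here is a minimal numeric sketch. The numbers are purely illustrative (they are not LV4's actual figures):

```python
from math import exp

# Illustrative values only (NOT LV4's design figures)
v_e = 2400.0  # effective exhaust velocity, m/s
dv = 1500.0   # required delta-v, m/s, including loss estimates
m_f = 80.0    # final (dry) mass, kg

# Rearranged Tsiolkovsky equation: m_0 = m_f * exp(dv / v_e)
m_0 = m_f * exp(dv / v_e)
m_prop = m_0 - m_f  # propellant mass implied by the mass ratio

print(f"GLOW: {m_0:.1f} kg, propellant: {m_prop:.1f} kg")
```

Reducing dry mass, raising exhaust velocity, or trimming delta-v losses all shrink $m_0$ exponentially, which is why GLOW makes a good single figure of merit.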
In addition to this high-level objective, there are several requirements and constraints that a given design must fulfill for a successful mission. These will be detailed in a following section. To represent them, we use a series of barrier and penalty functions which bound potential rockets to a feasible region of the design space. A barrier function imposes an absolute constraint that cannot be violated under any circumstance, while a penalty function merely disincentivizes designs that stray far from a more permeable set of constraints.
By adding our three types of functions together, we obtain a merit function to minimize which encapsulates all of our relevant considerations as a positive real number. The input to our merit function is simply an array of values that are sufficient to determine our mathematical model of a rocket and its performance. However, each evaluation of the function entails that a simulation of the rocket's trajectory be computed, so the merit function itself does not admit to differentiation or finite differences.
There are several ways to approach this issue, two of which are covered in this program. If the design space has between 2 and 4 dimensions (inclusive), we may use the Nelder-Mead simplex method, a directed search that doesn't use derivatives. (A simplex is just a triangle generalized to n dimensions.) The performance of this geometry-based algorithm degrades as dimensionality increases, making it more likely to become stuck in a local minimum. Another option is a genetic algorithm, which seeks a global optimum stochastically; it is much more computationally expensive, but not restricted to low-dimensional spaces.
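The derivative-free search can be illustrated on a cheap stand-in objective. Here `toy_merit` is a hypothetical placeholder; the real merit function in this notebook runs a full trajectory simulation per evaluation, but the optimizer call is the same shape:

```python
import numpy as np
from scipy.optimize import minimize

# Toy 3-variable objective standing in for the rocket merit function.
# Nelder-Mead only ever compares function values at simplex vertices,
# so no derivative information is required.
def toy_merit(x):
    return (x[0] - 1.0)**2 + (x[1] + 2.0)**2 + (x[2] - 0.5)**2

x_initial = np.array([10.0, 10.0, 10.0])  # a "feasible initial design"
res = minimize(toy_merit, x_initial, method='nelder-mead',
               options={'adaptive': True, 'xatol': 1e-6, 'fatol': 1e-6})

print(res.x)  # approaches [1, -2, 0.5] without computing a derivative
```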
Theoretically, there are other methods possible, such as constructing and optimizing on a response surface model that is itself only a calculus-and-statistics-friendly model of our simulation model. We could also do Monte-Carlo simulation or some other stochastic optimization method, or even some sort of Latin Square trade-analysis. However, our results using both of the methods we selected closely agree and have been satisfactory.
Where we deviate significantly from the previous MDO is in our decision to run an iterative sequence of Nelder-Mead optimizations instead of just one. Before the penalty and barrier functions are added to the objective function, they are weighted by user-selected parameters. Balancing these weights is an art: if they are too low, the optimizer can ignore or even abuse them; if they are too high, the optimizer can neglect the objective and rapidly spiral into a somewhat artificial local minimum. Rather than picking arbitrary values that seem to give us the results we want, we now begin with very low weights and increase them with each successive optimization.
This ought to enable more global coverage initially, while restricting the optimizer's freedom of movement in the design space after it has had a chance to find a satisfactory neighborhood. Since the weights become astronomically high, this also gives us a trivial method for determining whether we have articulated a well-posed problem and produced a strictly feasible design, or asked an unsatisfiable problem and received only the closest to being feasible design in a region of even worse alternatives.
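The escalating-weight idea can be sketched in one dimension. The objective and constraint below are toy stand-ins, but the weight doubles each iteration exactly as in this notebook's schedule:

```python
import numpy as np
from scipy.optimize import minimize

# 1-D sketch: minimize x**2 subject to the soft constraint x >= 1,
# enforced by an exterior penalty whose weight rho doubles per iteration.
def penalized(x, rho):
    penalty = max(0.0, 1.0 - x[0])**2 / 2  # inactive once x >= 1
    return x[0]**2 + rho * penalty

x = np.array([0.0])  # infeasible starting point
for n in range(20):
    rho = 2.0**n
    # each pass restarts the simplex from the previous optimum
    x = minimize(penalized, x, args=(rho,), method='nelder-mead').x

print(x[0])  # driven arbitrarily close to the constrained optimum x = 1
```

With a low initial weight the optimizer roams freely (the first pass settles well below the constraint); as rho grows, successive optima are squeezed toward feasibility.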
Note that if a genetic algorithm is performed instead, an arbitrary weight must be selected intelligently, since a sequence of such optimization problems is exceedingly impractical.
We technically may not speak of "convergence", since it is possible for the Nelder-Mead method to stagnate on non-stationary points if the geometry of the simplex becomes ill-conditioned. However, since we are doing a sequence of optimizations, the simplex geometry is reset each time, so this is likely to not be an issue. The results have been in the same neighborhood as solutions given by a genetic algorithm, so our global coverage of the feasible region of the design space seems satisfactory.
The objective of the optimization is to minimize Gross Lift-Off Weight (GLOW). Overall project costs are likely to scale with GLOW.
We distilled the key design variables to just propellant mass, mass flow rate, and exit pressure. All other relevant variables are determined from those and our design constants and parameters. For the sake of brevity, we won't go into those details in the following discussion, but the interested reader can learn more from the supporting notebooks.
Our optimization routine currently includes seven inequality constraints, which appear in the formal statement below.
Mathematically, the problem can be stated as estimating $\bar{x}^*$ to a specified degree of precision according to these conditions,
$$\lim_{n \to \infty} \mu_{n} = 0 \qquad \lim_{n \to \infty} \rho_{n} = \infty$$
$$f_{n}(\bar{x}) = m_{obj}(\bar{x}) + \mu_{n}\sum_i h_i(\bar{x}) + \rho_{n}\sum_j g_j(\bar{x})$$
$$\bar{x}_n = \arg\min_{\bar{x}} f_{n}(\bar{x})$$
$$\lim_{n \to \infty} \bar{x}_n = \bar{x}^{*},$$
where the design vector is
$$\bar{x} = \begin{pmatrix} m_{prop} \\ \dot{m} \\ p_e \end{pmatrix},$$
subject to the barrier constraint
$$h_{barrier}: \quad 108401 \ \mathrm{m} < h < 151401 \ \mathrm{m},$$
and subject to the penalty constraints
$$g_{penalty}: \quad \begin{cases} F \leq 6 \ \mathrm{kN} \\ LS \geq 22 \ \mathrm{m/s} \\ a_{max}/g_0 \leq 15 \\ TWR \geq 2 \\ L/D \leq 21 \\ p_e/p_a \geq 0.35 \end{cases}$$
It is possible that for any given $n$, or in the limit, there is no unique global minimum. However, a local minimum is attained which is very close to the results of a genetic algorithm.
By sending the limit of $\mu$ to 0, the barrier function is relaxed, which allows designs to approach the absolute window for acceptable apogees. By sending the limit of $\rho$ to $\infty$, the penalty function is strengthened, which prevents designs from leaving the feasible region.
In [1]:
# Nelder-Mead simplex search
%matplotlib inline
%run openrocket_interface.ipynb
%run System_Definition.ipynb
%run Simulation.ipynb

# i'm sorry for global vars...
global allvectors, dbz, allobjfun
dbz = 0          # arithmetic error tracker (from crossing boundary constraints)
allvectors = []  # array for all design vecs, global variable
allobjfun = []   # array for tracking objective function evaluations
We chose to abstract all of the functions used within the merit function for increased flexibility, ease of reading, and later utility.
Technically logarithmic barrier functions allow negative penalties (i.e. rewards), but since we use upper and lower altitude barriers, it is impossible that their sum be less than 0. If the optimizer steps outside of the apogee window, the barrier functions can attempt undefined operations (specifically, taking the logarithm of a negative number), so some error handling is required to return an infinite value in those cases. Provided that the initial design is within the feasible region, the optimizer will not become disoriented by infinite values.
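That apogee-window behavior can be sketched in isolation. `apogee_barrier` below is a hypothetical stand-alone version of the idea (the notebook's `barrier()` is the authoritative implementation); the window bounds are the ones from the problem statement:

```python
from math import log, inf

# Sum of a lower and an upper logarithmic barrier on apogee altitude.
# Inside the window both log arguments lie in (0, 1), so each term is
# positive; outside it, log() of a negative raises ValueError, which we
# convert to an infinite merit value instead of crashing the optimizer.
def apogee_barrier(alt, floor=108401.0, ceiling=151401.0):
    try:
        return -log(alt/floor - 1) - log(-(alt/ceiling - 1))
    except ValueError:  # outside the apogee window
        return inf

print(apogee_barrier(130000.0))  # finite and positive inside the window
print(apogee_barrier(100000.0))  # inf below the floor
```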
In [2]:
# all of our comparisons are ratios instead of subtractions because
# it's normalized, instead of dependent on magnitudes of variables and constraints

# minimize this, **2 makes it well behaved w.r.t. when var=cons
def objective(var, cons):
    return (var/cons)**2 / 2

# **2 because i like it more than abs(), but that also works
def exact(var, cons):
    return (var/cons - 1)**2 / 2

# this is your basic exterior penalty, either punishes for infeasibility or is inactive
def exterior(var, cons, good_if_less_than=False):
    if good_if_less_than:
        return max(0, var/cons - 1)**2 / 2
    else:
        return max(0, -(var/cons - 1))**2 / 2

# this barrier function restricts our objective function to the strictly feasible region
# make rockets great again, build that wall, etc, watch out for undefined operations
def barrier(var, cons, int_point=False, good_if_less_than=True):
    global dbz
    try:  # just in case we accidentally leave the feasible region
        if not int_point:
            if good_if_less_than:
                return -log(-(var/cons - 1))
            else:
                return -log(var/cons - 1)
        else:
            def interior(g): return 1/g  # in case we don't like logarithms, which is a mistake
            if good_if_less_than:
                return -interior(var/cons - 1)
            else:
                return -interior(-(var/cons - 1))
    except (ValueError, ZeroDivisionError):
        dbz += 1  # keep track of arithmetic errors, side effect
        # ordinarily returning inf is bad practice since it could confuse the optimizer,
        # but since this is a barrier function, not an ordinary penalty, it's fine
        return float('inf')
Given a design vector $x$ and the iteration number $n$, our merit function $f$ runs a trajectory simulation and evaluates the quality of that rocket. We keep track of each design and its merit value for later visualization, hence the global variables.
We run an iterative sequence of optimization routines for the win. We use the Euclidean distance in the design space between successive optimal designs to decide when it is no longer worth continuing, around the 6th decimal place.
In [3]:
# this manages all our constraints
# penalty parameters: mu -> 0 and rho -> infinity
def penalty(ls, F, LD, TWR, S_crit, alt, max_g, stbl_margin, ballast, mu, rho):
    b = [barrier(alt, cons_alt, int_point=False, good_if_less_than=False),
         barrier(alt, cons_ceiling, int_point=False, good_if_less_than=True),
         #barrier(ballast, 0.0, int_point=False, good_if_less_than=False)
        ]
    eq = []
    ext = [exterior(F, cons_thrust, good_if_less_than=True),
           exterior(ls, cons_ls, good_if_less_than=False),
           exterior(LD, cons_LD, good_if_less_than=True),
           exterior(TWR, cons_TWR, good_if_less_than=False),
           exterior(S_crit, cons_S_crit, good_if_less_than=False),
           exterior(max_g, cons_accel, good_if_less_than=True),
           #exterior(stbl_margin, cons_stblty, good_if_less_than=False)
          ]
    return mu*sum(b) + rho*(sum(eq) + sum(ext))

# Pseudo-objective merit function
# x is array of design parameters, n is index of penalty and barrier functions
# print blocks are sanity checks so i'm not staring at a blank screen and can see what various tweaks actually do
def f(x, n=8):
    global allvectors, allobjfun
    # get trajectory data
    sim = trajectory(x[0], x[1], x[2],
                     throttle_window, min_throttle,
                     rcs_mdot, rcs_p_e, rcs_p_ch,
                     x0[3], x0[4], x0[5], x0[6], x0[7],
                     #ballast, root, tip, sweep, span,
                     airfrm_in_rad, OF, p_ch, T_ch, ke, MM,
                     dt, True)
    obj_func = objective(sim.LV4.GLOW, cons_mass)  # minimize GLOW
    # then, calculate penalization from trajectory performance
    pen_func = penalty(sim.launch_speed,
                       sim.thrust[0]/1000,
                       sim.LV4.length/sim.LV4.diameter,
                       sim.TWR,
                       sim.S_crit,
                       sim.alt[-1],
                       sim.max_g_force,
                       sim.min_stability,
                       sim.LV4.ballast,
                       mu_0 / (2**n),  # initial mu and
                       rho_0 * (2**n)) # rho are selected for nice behavior
    # add objective and penalty functions
    merit_func = obj_func + pen_func
    allvectors.append(x)               # maintains a list of every design, side effect
    allobjfun.append(log(merit_func))  # log plot is cleaner to look at, for later
    return merit_func

# we want to iterate our optimizer for theoretical "convergence" reasons (given some assumptions)
# n = number of sequential iterations
def iterate(f, x_0, n):
    x = x_0[:3]  # initial design vector, stripping off throttling variables for now
    global dbz
    designs = []
    for i in range(n):
        print("Iteration " + str(i+1) + ":")
        # this minimizer uses the simplex method; note that args must be a tuple
        res = minimize(f, x, args=(i,), method='nelder-mead',
                       options={'disp': True, 'adaptive': True, 'xatol': 0.001, 'fatol': 0.001})
        x = res.x              # feed optimal design vec into next iteration
        designs.append(res.x)  # we want to compare sequential designs,
                               # so we can stop when convergence criteria are met
        alt = trajectory(x[0], x[1], x[2],
                         throttle_window, min_throttle,
                         rcs_mdot, rcs_p_e, rcs_p_ch,
                         x0[3], x0[4], x0[5], x0[6], x0[7],
                         #ballast, root, tip, sweep, span,
                         airfrm_in_rad, OF, p_ch, T_ch, ke, MM,
                         dt).alt
        print("  Arithmetic errors (from violations of acceptable altitude window): " + str(dbz))
        print("Propellant mass (kg): " + str(x[0]))
        #print("Airframe diameter (in): " + str(dia)) # in case we want diameter to be a design variable
        print("Mass flow rate (kg/s): " + str(x[1]))
        print("Exit pressure (Pa): " + str(x[2]))
        print("Peak Altitude (km): " + str(alt[-1]/1000))
        #print("Lower and Upper Drag Bounds (N): " + str(x[3]) + ', ' + str(x[4]))
        print('')
        dbz = 0  # we only care about divisions by zero within each individual iteration, side effect
        if (i > 0) and (np.linalg.norm(designs[-1] - designs[-2]) < delta):
            print("Early termination! The Euclidean distance between the last two designs was < " + str(delta))
            break
    return x

def breed_rockets(f):
    res = differential_evolution(f, [(80, 200), (2.0, 4.5), (45000, 150000)],
                                 strategy='best1bin', popsize=80, mutation=(.1, .8), recombination=.05,
                                 updating='immediate', disp=True, atol=0.1, tol=0.1)
                                 #polish=True, workers=-1, disp=True, atol=0.1, tol=0.1)
    return res.x
In [4]:
# this creates some plots of the phase spaces of all our designs, doesn't save them
def phase_plot(m_prop, mdot_0, p_e):
    # ensure array inputs so the normalized plots below can divide elementwise
    m_prop, mdot_0, p_e = np.asarray(m_prop), np.asarray(mdot_0), np.asarray(p_e)
    # the subplots have different x-axis quantities, so their limits are not shared
    fig, (ax1, ax2, ax3) = plt.subplots(3, sharex=False)
    ax1.plot(m_prop, p_e)
    ax1.set_title('Design Vectors')
    ax1.yaxis.major.locator.set_params(nbins=6)
    ax1.set_xlabel("Propellant (kg)")
    ax1.set_ylabel("Exit pressure (Pa)")
    ax2.plot(mdot_0, p_e)
    ax2.yaxis.major.locator.set_params(nbins=6)
    ax2.set_xlabel("Mass flow rate (kg/s)")
    ax2.set_ylabel("Exit pressure (Pa)")
    ax3.plot(m_prop, mdot_0)
    ax3.yaxis.major.locator.set_params(nbins=6)
    ax3.set_ylabel("Mass flow rate (kg/s)")
    ax3.set_xlabel("Propellant (kg)")
    # we display the first diagram of projected 2d phase portraits
    plt.show()
    fig2 = plt.figure()
    ax = fig2.add_subplot(111, projection='3d')
    ax.set_title('Design Space Trajectory')
    ax.plot(m_prop, mdot_0, p_e)
    ax.set_xlabel("Propellant (kg)")
    ax.set_ylabel("Mass flow rate (kg/s)")
    ax.set_zlabel("Exit pressure (Pa)")
    # we display the interactive 3d phase portrait
    plt.show()
    # note, we're choosing to not automatically save these, but they can be saved from the interface
    fig3 = plt.figure()
    ax = fig3.add_subplot(111)
    ax.set_title('Design Values as % of optimum design value')
    ax.plot(m_prop/(m_prop[-1]), "r", label="m_prop")
    ax.plot(mdot_0/(mdot_0[-1]), "g", label="mdot_0")
    ax.plot(p_e/(p_e[-1]), "b", label="p_e")
    ax.legend()
    plt.show()

def design_grapher():  # no arguments because I'm using dirty dirty global variables
    # get us some nice plots of the phase space of design vectors
    designplot = [[], [], []]
    for i in range(0, len(allvectors)):
        for j in range(0, len(designplot)):
            designplot[j].append(allvectors[i][j])
    phase_plot(designplot[0], designplot[1], designplot[2])
    fig = plt.figure()
    ax = fig.add_subplot(111)
    ax.set_title('Objective + Constraint convergence')
    ax.plot(allobjfun, "r", label="Objective Function Evaluations (Logarithmic Scale)")
    ax.legend()
    plt.show()
In [5]:
# Results, this is the big boi function
if __name__ == '__main__':
    test = trajectory(x0[0], x0[1], x0[2],
                      throttle_window, min_throttle,
                      rcs_mdot, rcs_p_e, rcs_p_ch,
                      x0[3], x0[4], x0[5], x0[6], x0[7],
                      #ballast, root, tip, sweep, span,
                      airfrm_in_rad, OF, p_ch, T_ch, ke, MM,
                      dt)
    if (test.alt[-1] < cons_alt) or (test.alt[-1] > cons_ceiling):  # rudimentary error handling to save heartache
        raise Exception('Rocket apogee out of bounds! Apogee {:.3f} km'.format(test.alt[-1]/1000))
    # feed initial design into iterative optimizer, get most (locally) feasible design
    res = iterate(f, x0, iterations)
    # probe design space, darwin style. if design space has more than 3 dimensions, you need this. takes forever.
    #res = breed_rockets(f)
    print("Optimization done!")
    # Rename the optimized output for convenience
    m_prop = res[0]
    mdot = res[1]
    p_e = res[2]
    # get trajectory info from optimal design
    sim = trajectory(m_prop, mdot, p_e,
                     throttle_window, min_throttle,
                     rcs_mdot, rcs_p_e, rcs_p_ch,
                     x0[3], x0[4], x0[5], x0[6], x0[7],
                     #ballast, root, tip, sweep, span,
                     airfrm_in_rad, OF, p_ch, T_ch, ke, MM,
                     0.05)
    textlist = print_results(sim, True)
    # draw pretty pictures of optimized trajectory
    rocket_plot(sim.t, sim.alt, sim.v, sim.a, sim.thrust,
                sim.dyn_press, sim.Ma, sim.m, sim.p_a, sim.drag, sim.throttle, True)
    # get/print info about our trajectory and rocket
    for line in textlist:
        print(line)
    print('\nMaking an OpenRocket rocket and corresponding engine!')
    # create an openrocket file with matching engine for our design (and print/save trajectory data)
    make_engine(mdot, sim.m_prop, sim.thrust[0:sim.F_index + 1],
                sim.LV4.inr_r, airframe_thickness, sim.LV4.l_o, sim.LV4.l_f, sim.LV4.m_tank_o, sim.LV4.m_tank_f,
                sim.t[sim.F_index], sim.LV4.engine.Ve/g_n,
                sim.LV4.eng_sys_dry_mass, sim.LV4.eng_sys_len, sim.openrocket_CoM,
                sim.LV4.ballast, sim.LV4.fin)
    # draw more pretty pictures, but of the optimizer guts
    design_grapher()
In [6]:
'''m_prop = 141.88397561701234
mdot = 2.8728568968933215
p_e = 71035.22181877575
# get trajectory info from optimal design
sim = trajectory(m_prop, mdot, p_e,
                 throttle_window, min_throttle,
                 rcs_mdot, rcs_p_e, rcs_p_ch,
                 root, tip, sweep, span,
                 airfrm_in_rad, OF, p_ch, T_ch, ke, MM,
                 0.05)
sim.LV4.read_out()
# get/print info about our trajectory and rocket
#res_text = print_results(res)
#for line in res_text:
#    print(line)
res_text = ""
frame_thickness = 0.008255
inr_radius = sim.LV4.inr_r
print('Engine system details in trajectory log!')
print('\nMaking an OpenRocket rocket and corresponding engine!')
# create an openrocket file with matching engine for our design (and print/save trajectory data)
make_engine(mdot, sim.m_prop, sim.F[0:sim.F_index + 1],
            inr_radius, frame_thickness, sim.LV4.l_o, sim.LV4.l_f, sim.LV4.m_tank_o, sim.LV4.m_tank_f,
            sim.t[sim.F_index], sim.LV4.engine.Ve/g_n, res_text,
            sim.LV4.eng_sys_dry_mass, sim.LV4.eng_sys_len, sim.openrocket_CoM)
# draw pretty pictures of optimized trajectory
#rocket_plot(sim.t, sim.alt, sim.v, sim.a, sim.F, sim.q, sim.Ma, sim.m, sim.p_a, sim.D, sim.throttle)'''
Above, we can see the details of each iteration of the optimization sequence.
Then, we have detailed information from the trajectory of our optimal rocket design. Note that there are some low-level details suppressed here, but contained in the saved text file.
There is a plot of that trajectory and some of the state variables we care about, which can be enlightening.
Then there are three projections of the optimization path in the design space, as well as its 3D representation. This informs a sense of how an initial design is improved by the optimizer, and how design variables relate to each other.
The next visualization is of our design variables normalized as a percentage of their optimal values. This reduces the entire optimization into one easily comprehensible image. Note that spikes occur where iterations begin, since the Nelder-Mead method is reinitialized and must relearn how far it can move in any direction.
Finally, there is a semi-log plot of the merit values of our sequence of designs. The spikes are again from the beginning of iterations. If our problem is well-posed, punishment for deviation from feasibility will exponentially increase with each iteration and the minimum of each iteration will be decreasing. An ill-conditioned problem will have an increasing minimum with each iteration, since there would be no feasible design and the punishments would be growing. In either case, gaps appear in the plot where a design failed to satisfy the apogee window.
Prima facie, this program produces viable rocket designs. The next step is to experiment in OpenRocket with the model produced. It is critical that we make sure the stability requirement set by Base-11 is met (calibers > 2 throughout flight), and it is also an opportunity to experiment with different masses and fins, as well as environmental conditions such as wind or launching at 85 degrees.
Our sequence of merit evaluations is bounded below by 0 but not bounded above, since some designs have infinite values. Provided that the problem is well-conditioned, the subsequence of minima from each iteration is monotonically decreasing. So it seems reasonable to infer that an ideal optimal rocket design exists for a well-posed problem. This program approximates that ideal design, though whether it is a good approximation may be a qualitative judgement.
There are other design and optimization methods, and different choices that could have been made. This code is fairly flexible so it isn't too difficult to change assumptions on the fly. The goal is not to provide a definitive final design of a rocket; rather it is to provide a design tool which will be useful throughout the design, manufacturing, and testing processes of LV4, up to and including the day it launches.
It is important to remember that the difference between design parameters and variables is largely stipulative. With slight effort, some parameters could be freed for optimization and some variables could be frozen. The chamber pressure has been stipulated and determines the thermodynamic properties that made us select the oxygen/fuel ratio we stipulate. There is another script in this repository that can be used to perform a trade-space analysis of chamber pressure and propellant configurations, if we decide to revisit those preliminary assumptions.
We have been conservative in our mass estimates in order to perform a more robust optimization. We could also take the maximum merit evaluation in the neighborhood of a given design, as bounded by engineering tolerances, instead of simply evaluating a single design. That would obviously be computationally costly, but each neighborhood could be evaluated by parallel distributed processes, and the benefit of implementing a minmax problem instead of a minimization problem could be worth it, considering the inevitable inefficiencies that may arise in a physical system.
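A minimal sketch of that proposed minmax evaluation, assuming a sampled tolerance box around each design. Both `merit` and `worst_case_merit` are hypothetical stand-ins (the real merit function is this notebook's `f`), and the sampling choices are arbitrary:

```python
import numpy as np

# Stand-in merit function; the real one would run a trajectory simulation.
def merit(x):
    return float(np.sum((x - 1.0)**2))

# Score a design by the WORST merit over a tolerance box around it,
# rather than by its single nominal evaluation.
def worst_case_merit(x, tol, samples=64, seed=0):
    rng = np.random.default_rng(seed)
    # sample candidate as-built designs uniformly within +/- tol per variable
    perturbed = x + rng.uniform(-tol, tol, size=(samples, len(x)))
    return max(merit(p) for p in perturbed)

x = np.array([1.0, 1.0, 1.0])
print(worst_case_merit(x, tol=0.05))  # worst merit in the sampled neighborhood
```

Each neighborhood evaluation is embarrassingly parallel, which is what makes the distributed-processing idea above plausible despite the extra cost.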
It would be a good idea to do some statistical analysis of the design space to produce a heat map as well, so we could characterize the topology of our design space. It's possible that there may be multiple optimal neighborhoods, and this would answer that question. Note that a sensitivity analysis of the current optimization program could also answer that question, by probing the design space with different initial designs.
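One way the probing could be sketched: draw space-filling initial designs across the plausible ranges (the bounds below are borrowed from `breed_rockets` in this notebook) and, in practice, run `iterate()` from each to see whether the optimizer lands in one neighborhood or several. The sampler choice is an assumption:

```python
import numpy as np
from scipy.stats import qmc

# Design-variable bounds: m_prop (kg), mdot (kg/s), p_e (Pa),
# matching the ranges given to differential_evolution above.
bounds_lo = np.array([80.0, 2.0, 45000.0])
bounds_hi = np.array([200.0, 4.5, 150000.0])

# Latin hypercube sampling spreads probes evenly across the design space
sampler = qmc.LatinHypercube(d=3, seed=0)
starts = qmc.scale(sampler.random(n=8), bounds_lo, bounds_hi)

for x0_probe in starts:
    print(x0_probe)  # candidate initial design; each would seed one optimization run
```

If several probes settle into distinct optima, that would indicate multiple optimal neighborhoods; if they all agree, it would strengthen the claim of satisfactory global coverage.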