Constraints in Rocketsled MOO problem

Hi,

We’re just getting started with Rocketsled on DoD DSRC HPC. We have FireWorks running, and both the basic and complex Rocketsled sample problems run. So far, so good.

Next step is to code up a few classic problems from Wikipedia - https://en.wikipedia.org/wiki/Test_functions_for_optimization - learning as we go and moving gradually towards our actual motivating problems. My question is: given a MOO problem, what is the recommended way to add input constraints more complex than simple rectangular boundaries?

A simple way this might be achieved without a lot of heavy lifting would be to add a meetsConstraints() check in the objective task, prior to the expensive function evaluation, that returns a large penalty (distance from the feasibility boundary?) - the underlying RL algorithm would then learn to avoid out-of-constraint areas. Not sure if this would work.
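For concreteness, a minimal sketch of the penalty idea above. Everything here (meets_constraints, expensive_simulation, the triangular constraint) is a hypothetical stand-in, not rocketsled API:

```python
def meets_constraints(x):
    # Example "triangular" feasibility region: x1 must exceed x2.
    return x[0] > x[1]

def expensive_simulation(x):
    # Placeholder for the real (expensive) evaluation.
    return sum(x)

def objective(x):
    if not meets_constraints(x):
        # Large base penalty plus distance from the feasibility boundary,
        # so infeasible points are ranked by how far out they are.
        return 1e6 + (x[1] - x[0])
    return expensive_simulation(x)
```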

Is there an example Rocketsled problem with input constraints? Both continuous and discrete would be nice. We have a lot of previous experience with non-RL MOO approaches, which usually involve taking a simple problem as described by an SME and twisting it up like a pretzel to try to get a convex problem description to work in SCIP… The idea of an RL-based optimizer that can solve arbitrarily posed mixed problems is incredibly interesting.

thanks

Andrew Strelzoff

Hi Andrew,

If I am understanding your question correctly, you are interested in searching discontinuous spaces with rocketsled. If this is correct, you should not need any meetsConstraints function nor transformations into pretzel-space. Rocketsled has tools for handling these kinds of spaces.

  1. The input constraints for all problems, regardless of MOO or single objective, can be specified with the “dimensions” and (if needed) the “space_file” arguments to MissionControl.configure. The syntax for the dimensions argument is meant to be straightforward (and can be viewed in the comprehensive guide). It is a list of constraints, one per dimension. Each constraint is either a tuple or a list. Tuples of ints or floats indicate ranges (inclusive rectangular bounds). Lists of floats, ints, or strings represent discrete, discontinuous points.

  2. If you have a list of selected points in each dimension (i.e., x1 is only valid for values of [0.14, 0.13, 0.18, 0.99, 1000.0], x2 is only valid for [7.0, 19.5], etc.), rocketsled will handle this by generating the search space combinatorially. You can view some examples in the docs’ Comprehensive Guide. One is reproduced below:

dimensions=[[1.332, 4.565, 19.030], [221.222, 221.0283, 221.099]]

This reductive example has 2 dimensions: dim1 is valid for [1.332, 4.565, 19.030] and dim2 is valid for [221.222, 221.0283, 221.099]. The entire search space is then 9 points.
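For reference, a minimal sketch of how such dimensions might be passed to MissionControl. The LaunchPad settings are placeholders and my_wf_creator is a hypothetical workflow-building function; see the comprehensive guide for a full working example:

```python
from fireworks import LaunchPad
from rocketsled import MissionControl

# Placeholder LaunchPad config for your own MongoDB instance.
lpad = LaunchPad(host="localhost", port=27017, name="rsled")
mc = MissionControl(launchpad=lpad, opt_label="opt_default")

mc.configure(
    wf_creator=my_wf_creator,  # hypothetical function returning a Workflow
    dimensions=[[1.332, 4.565, 19.030], [221.222, 221.0283, 221.099]],
)
```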

2b. This isn’t in the docs, but you can also define each dimension individually using any of the allowed syntax. For example, the following dimensions

dimensions=[(1.0, 100.0), [221.222, 221.0283, 221.099], ["red", "blue", "green"]]

define the search space as x1 = all floats between 1.0 and 100.0, x2 = only 3 possible floats, and x3 = only 3 categories. The search space is then combinatorially generated; ranged dimensions are sampled uniformly.
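To illustrate the idea conceptually (this is not rocketsled’s actual internal code, just a sketch of what “combinatorially generated, with ranged dimensions sampled uniformly” could mean):

```python
import itertools
import random

x2_options = [221.222, 221.0283, 221.099]
x3_options = ["red", "blue", "green"]

# Enumerate the discrete dimensions; draw the ranged dimension uniformly
# once per discrete combination.
candidates = [
    (random.uniform(1.0, 100.0), x2, x3)
    for x2, x3 in itertools.product(x2_options, x3_options)
]
print(len(candidates))  # 9 discrete combinations, each with a sampled x1
```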

  3. If you are interested in search spaces which are **not** combinatorially generated - i.e., only a few points in the space are actually valid - use the “space_file” argument (also detailed in the docs). This allows you to specify a Python pickle file which rocketsled will read and use to define the space. This file is essentially a big list of allowed points, which you can generate however you want. Thus you can define the space in whatever manner you desire, without the limitations of the dimensions syntax.

For example, suppose your 2D space could be rectangularly outlined by x1 as floats between 0.1 and 0.9 and x2 as integers between 1 and 100. You could define your constrained search space to be only the points [0.75, 5], [0.1, 9], and [0.85, 99], and no other points. Just put those points in a list and save it as a pickle file, then tell rocketsled where to find it with space_file.
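A minimal sketch of creating such a file (the filename is arbitrary; the exact expected format is described in the docs):

```python
import pickle

# The only three feasible points in the otherwise-rectangular space.
allowed_points = [[0.75, 5], [0.1, 9], [0.85, 99]]

with open("my_space.pickle", "wb") as f:
    pickle.dump(allowed_points, f)

# Then, hypothetically:
# mc.configure(..., dimensions=[(0.1, 0.9), (1, 100)],
#              space_file="my_space.pickle")
```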

Note that specifying space_file still requires you to define dimensions. These dimensions are used for some type-checking operations, though, not for the actual search.

If you have any problems using any of the above methods (beyond the limitations mentioned), please let me know!

Also note that while using the built-in optimizers is the easiest method of optimization, they are not ideal for every (or any!) problem. They are general-purpose by design. Thus you can specify the latest and greatest custom optimizer with the ‘predictor’ argument and use that instead. All the above arguments (1, 2, 3) should still work as long as your custom predictor can utilize the inputs rocketsled gives it (namely, a list of explored points, the corresponding objective function outputs, and a list of unexplored points). If your custom optimizer requires rectangular boundaries, though, it looks like you’ll need meetsConstraints or pretzel-space :confused:
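A rough sketch of what such a predictor might look like. The argument names and the exact signature rocketsled expects should be checked against the docs; random choice here stands in for a real surrogate model:

```python
import random

def my_predictor(X_explored, Y, x_dims, X_unexplored):
    """Hypothetical custom predictor. A real one would fit a surrogate
    model to (X_explored, Y) and rank X_unexplored; here we simply pick
    an unexplored point at random."""
    return random.choice(X_unexplored)

# Hypothetical hookup; the exact form the predictor argument expects
# (function path vs. object) is described in the rocketsled docs:
# mc.configure(..., predictor="my_module.my_predictor")
```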


Hi,

So, as I understand your answer, Rocketsled can handle either combinatorial spaces (rectangular-bounded continuous variables and lists of discrete options) or a pre-calculated list of points which respects some boundaries.

Unfortunately, there are a lot of problems with more complex continuous sample spaces. A simple example would be a “triangular” space with something like x > y; a more realistic one would be aircraft stability, which is a series of front-to-back and side-to-side balance equations at various operating speeds, maneuvers, and outside air conditions. In addition to complex constraints that make most random points infeasible, there are numerous discrete design choices, most of which limit other choices. So if we pick engine A, we cannot have fuel tank X because of space constraints. If we pick fuel tank Y, then we cannot pick landing gear M because it will be too heavy, and so on. So we have a space of quadrillions of possible designs, each of which takes about a week on 100 nodes to evaluate.

Current state of the art is (seriously) to assemble a panel of experts who take the most recent moderately successful design and tweak it a little this way and that to meet new requirements or upgrade a few components. This mostly works, but it produces few genuinely new designs or innovations.

So, why are we looking at Rocketsled? We have funding to see if reinforcement learning can be used to automatically explore a high-dimensional, expensive-to-evaluate, heavily constrained tradespace. Rocketsled seems an attractive choice to work with or extend because it was intended for HPC, and the published samples and papers are actually not that far from our interests.

So, I may go ahead and see if I can place a penalty on out-of-constraint choices and see if one of the built-in algorithms can “learn” the constraints as well as find a good solution for some of the test problems from the literature.

thanks


Hi Andrew,

Yes, you are correct about Rocketsled’s current capabilities for defining constraints. The trouble, then, seems to be that your valid search space is defined by complex algebraic equations? Are these equations solvable a priori, or are they necessarily part of the objective function (workflow) as well?

Rocketsled at this time has no algebraic constraint-based system for defining search spaces. To my knowledge, few black-box optimization packages have such a capability. I think those kinds of constraints are more common for problems where the objective function has a closed form or is convex.

If possible, I would try to avoid mixing learning the constraints with learning the actual objective function, especially if you are considering penalizing an existing objective. A black-box optimization algorithm may be able to learn the constraints (i.e., via a meetsConstraints check or similar), but if the constraints are complex and hard to optimize, the algorithm will likely take a long(er) time to find solutions. In other words, the algorithm may be able to optimize the simulation alone, but optimizing the simulation + constraints might become inefficient if the real constraints are only defined implicitly.

One way to get around “mixing” constraints with your objective would be to define some objectives which exist only to determine whether a point meets the constraints. So if you have three minimization objectives for a simulation, say

[obj1, obj2, obj3],

you could make your objectives

[obj1, obj2, obj3, constraintPenalty1, constraintPenalty2, …].

The built-in algorithms treat each objective completely independently, so there would be no mixing of the complex constraints with the objective function itself. A potential problem with this is that solutions are only considered solutions if they are Pareto-optimal, so you would probably get some “solution” points which don’t meet the constraints but score well on the other objectives…
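For concreteness, a sketch of what the augmented objective list might look like. run_simulation, total_mass, and both constraint expressions are hypothetical stand-ins:

```python
def evaluate(x):
    # Hypothetical expensive workflow returning the three real objectives.
    obj1, obj2, obj3 = run_simulation(x)

    # Constraint penalties as extra minimization objectives: zero when the
    # constraint is satisfied, positive (distance to feasibility) otherwise.
    penalty1 = max(0.0, x[1] - x[0])           # e.g., enforce x1 >= x2
    penalty2 = max(0.0, total_mass(x) - 1e4)   # e.g., hypothetical mass limit

    return [obj1, obj2, obj3, penalty1, penalty2]
```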

Hi all,

Would it help if rocketsled allowed the user to define a function that randomly picks a valid point in input space (instead of explicitly defining ranges for the various inputs)?

e.g.

  1. Instead of defining a search space, the user provides a function that can randomly give you a valid point in input space

  2. Rocketsled picks, say, 1000 random points using the provided function and runs the one that is the “best” according to the current surrogate model and selection strategy. Since the search space has continuous dimensions, we don’t have to worry about duplication (a rough sketch of such a sampler is below).
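A minimal sketch of such a user-supplied sampler, using rejection sampling against a hypothetical triangular constraint; the 1000-candidate step would be rocketsled’s side of the contract:

```python
import random

def random_valid_point():
    # Rejection sampling: draw uniformly from the bounding box, keep only
    # points satisfying the (hypothetical) constraint x1 > x2.
    while True:
        x = [random.uniform(0.0, 1.0), random.uniform(0.0, 1.0)]
        if x[0] > x[1]:
            return x

# Rocketsled's side of the proposal: gather many valid candidates, then let
# the surrogate model / selection strategy choose which one to actually run.
candidates = [random_valid_point() for _ in range(1000)]
```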

Best,
Anubhav

Hey Anubhav,

That certainly seems like a good solution to me, and at first glance wouldn’t be too troublesome to implement in rocketsled.

@Andrew, would this mitigate your problems with constraints?

Thanks,

Alex
