[Profile widgets: mpolson64's GitHub contribution chart, GitHub stats, and most used languages]

Activity

02 Dec 2022

Pull Request

Mpolson64

Report all MetricFetchEs (not just ones that cause trial to Fail)

Created On 02 Dec 2022 at 04:45:37
Create Branch
Mpolson64 In mpolson64/Ax Create Branch export-D41687682

Mpolson64

Adaptive Experimentation Platform

On 02 Dec 2022 at 04:45:35

Mpolson64

Kill restore_model_from_generator_run as it's no longer needed! (#1300)

Summary: Pull Request resolved: https://github.com/facebook/Ax/pull/1300

NOTE: Copied from D32605396 (too stale to rebase)

Now that we have refactored the generation strategy to use generation nodes, we no longer need to refit the model when the generation strategy is re-loaded. Instead, we can simply fit it the next time generation_strategy.gen is called, which is preferable because re-fitting the model does not belong in the deserialization call anyway: it is a large side effect that makes deserialization heavy and slow.

Differential Revision: D41687258

fbshipit-source-id: ca6490cca1c3ab08ab1ab28cf3c3d686f64af44f

Pushed On 02 Dec 2022 at 04:40:59
Create Branch
Mpolson64 In mpolson64/Ax Create Branch export-D41687258

Mpolson64

Adaptive Experimentation Platform

On 02 Dec 2022 at 04:17:52
Pull Request

Mpolson64

Kill `restore_model_from_generator_run` as it's no longer needed!

Created On 02 Dec 2022 at 04:17:51
Create Branch
Mpolson64 In mpolson64/Ax Create Branch export-D41444658

Mpolson64

Adaptive Experimentation Platform

On 30 Nov 2022 at 03:42:35
Pull Request

Mpolson64

Improve Logging for MetricFetchE

Created On 30 Nov 2022 at 03:42:34
Pull Request

Mpolson64

Fix tutorials after revert to change in Trial.fetch_data

Created On 29 Nov 2022 at 06:29:29
Create Branch
Mpolson64 In mpolson64/Ax Create Branch export-D41584429

Mpolson64

Adaptive Experimentation Platform

On 29 Nov 2022 at 06:29:28
Pull Request

Mpolson64

Add comment to MetricFetchE

Created On 29 Nov 2022 at 01:59:23
Create Branch
Mpolson64 In mpolson64/Ax Create Branch export-D41561375

Mpolson64

Adaptive Experimentation Platform

On 29 Nov 2022 at 01:59:23
Pull Request

Mpolson64

Improve traceback visibility in Metric._unwrap.*

Created On 21 Nov 2022 at 06:32:26
Create Branch
Mpolson64 In mpolson64/Ax Create Branch export-D41438869

Mpolson64

Adaptive Experimentation Platform

On 21 Nov 2022 at 06:32:25

Mpolson64

started

Started On 21 Nov 2022 at 02:30:50

Mpolson64

Extend Posterior API to support torch distributions & overhaul MCSampler API (#1254)

Summary: Pull Request resolved: https://github.com/facebook/Ax/pull/1254

X-link: https://github.com/facebookresearch/aepsych/pull/193

X-link: https://github.com/pytorch/botorch/pull/1486

The main goal here is to broadly support non-Gaussian posteriors.

  • Adds a generic TorchPosterior which wraps a Torch Distribution. This defines a few properties that we commonly expect, and calls the distribution for the rest.
  • For a unified plotting API, this shifts away from mean & variance to a quantile function. Most torch distributions implement an inverse CDF, which is used as the quantile function. For others, the user should implement it at either the distribution or the posterior level.
  • Hands off the burden of base sample handling from the posterior to the samplers. Using a dispatcher-based get_sampler method, we can support SAA with mixed posteriors without having to shuffle base samples in a PosteriorList, as long as all base distributions have a corresponding sampler and support base samples.
  • Adds ListSampler for sampling from PosteriorList.
  • Adds ForkedRNGSampler and StochasticSampler for sampling from posteriors without base samples.
  • Adds rsample_from_base_samples for sampling with base_samples / with a sampler.
  • Absorbs FullyBayesianPosteriorList into PosteriorList.
  • For MC acqfs, introduces a get_posterior_samples for sampling from the posterior with base samples / a sampler. If a sampler was not specified, this constructs the appropriate sampler for the posterior using get_sampler, eliminating the need to construct a sampler in __init__, which we used to do under the assumption of Gaussian posteriors.
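
A minimal sketch of the dispatcher-based sampling flow described above (the import path, toy model, and shapes are assumptions about the post-refactor BoTorch API, not taken verbatim from this diff):

import torch
from botorch.models import SingleTaskGP
from botorch.sampling.get_sampler import get_sampler

# Toy model and a joint posterior over 4 test points (illustrative only).
train_X = torch.rand(8, 2, dtype=torch.double)
train_Y = train_X.sum(dim=-1, keepdim=True)
model = SingleTaskGP(train_X, train_Y)
posterior = model.posterior(torch.rand(4, 2, dtype=torch.double))

# get_sampler dispatches on the posterior type and returns an MCSampler that knows how to
# handle base samples for it (a Sobol QMC normal sampler for this Gaussian posterior).
sampler = get_sampler(posterior=posterior, sample_shape=torch.Size([128]))
samples = sampler(posterior)  # Tensor of shape 128 x 4 x 1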

TODOs:

  • Relax the Gaussian assumption in acquisition functions & utilities. Some of this might be addressed in a follow-up diff.
  • Updates to website / docs & tutorials to clear up some of the Gaussian assumptions and introduce the new, relaxed API. Likely a follow-up diff.

Other notables:

  • See D39760855 for usage of TorchDistribution in SkewGP.
  • TransformedPosterior could serve as the fallback option for derived posteriors.
  • MC samplers no longer support resample or collapse_batch_dims(=False). These can be handled by i) not using base samples, ii) just using torch.fork_rng and sampling without base samples from that. Samplers are only meant to support SAA. Introduces ForkedRNGSampler and StochasticSampler as convenience samplers for these use cases.
  • Introduced batch_range_override for the sampler to support edge cases where we may want to override posterior.batch_range (needed in qMultiStepLookahead)
  • Removes unused sampling utilities construct_base_samples(_from_posterior), which assume Gaussian posterior.
  • Moves the main logic of _set_sampler method of CachedCholesky subclasses to a _update_base_samples method on samplers, and simplifies these classes a bit more.

Reviewed By: Balandat

Differential Revision: D39759489

fbshipit-source-id: f4db866320bab9a5455dfc0c2f7fe2cc15385453

Pushed On 18 Nov 2022 at 10:49:49

Mpolson64

Break off Trial.fetch_data_results from Trial.fetch_data (#1265)

Summary: Pull Request resolved: https://github.com/facebook/Ax/pull/1265

Break apart fetching into two distinct methods, the former getting the results as returned by the metrics and the latter (unsafely) unwrapping them into a Data object, raising errors.

Also added comments along the unwrap methods letting users know that if their experiment does not properly configure default_data_type then fetching may be lossy (this was always the case, even before the results refactor, but bears repeating now).
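
A hedged, self-contained sketch of the split using the Service API (the experiment setup and metric name are illustrative assumptions; only the two Trial methods come from this diff):

from ax.service.ax_client import AxClient
from ax.service.utils.instantiation import ObjectiveProperties

ax_client = AxClient()
ax_client.create_experiment(
    name="fetch_demo",
    parameters=[{"name": "x", "type": "range", "bounds": [0.0, 1.0]}],
    objectives={"objective": ObjectiveProperties(minimize=True)},
)
params, trial_index = ax_client.get_next_trial()
ax_client.complete_trial(trial_index=trial_index, raw_data={"objective": (params["x"], 0.0)})

trial = ax_client.experiment.trials[trial_index]
results = trial.fetch_data_results()  # Dict[str, MetricFetchResult], one Result per metric
data = trial.fetch_data()             # unwraps the results into a single Data object, raising on fetch errors
print(data.df)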

Reviewed By: lena-kashtelyan

Differential Revision: D41348611

fbshipit-source-id: 928b8243e8732f7286b7ba712f0586ea9e2b7b86

Pushed On 18 Nov 2022 at 10:49:49

Mpolson64

Fix Result wrapping in mixed MapMetric/Metric experiments (#1259)

Summary: Pull Request resolved: https://github.com/facebook/Ax/pull/1259

While rare, it was possible for an experiment to get into a weird state in which it would crash during metric fetching. The following sequence of events would have to happen:

  1. The experiment fetches MapData from some MapMetric and caches it
  2. A nonmap Metric's data is requested and data is available
  3. Metric.lookup_or_fetch_experiment_data_multi is called and encounters the cached MapData
  4. The Metric tries to wrap the cached MapData into a Data (since Data is the metric's default_data_type).

The solution is to always use MapMetric._wrap_trial_data when the experiment's default data constructor is MapData

Reviewed By: bernardbeckerman, lena-kashtelyan

Differential Revision: D41279558

fbshipit-source-id: e7788ea8574146508f5186876150bd9c5814c385

Pushed On 17 Nov 2022 at 10:15:20

Mpolson64

Break off Trial.fetch_data_results from Trial.fetch_data (#1265)

Summary: Pull Request resolved: https://github.com/facebook/Ax/pull/1265

Break apart fetching into two distinct methods, the former getting the results as returned by the metrics and the latter (unsafely) unwrapping them into a Data object, raising errors.

Also added comments along the unwrap methods letting users know that if their experiment does not properly configure default_data_type then fetching may be lossy (this was always the case, even before the results refactor, but bears repeating now).

Reviewed By: lena-kashtelyan

Differential Revision: D41348611

fbshipit-source-id: fcf3f7cbcf76603c9823901346a422ecf4062010

Pushed On 17 Nov 2022 at 10:15:20

Mpolson64

incorporate MAX_INITIALIZATION_TRIALS override into choose_generation_strategy (#1263)

Summary: Pull Request resolved: https://github.com/facebook/Ax/pull/1263

Bug reported here, introduced in D40196130. I overlooked that num_initialization_trials is also important for determining min_trials_observed for the initialization step; when changing one but not the other in choose_gs_wrapper, we can end up in a state where the sweep expects to see more trials in a step than that step's total number of trials. Incorporating the trial cap into choose_generation_strategy mitigates this issue.

Reviewed By: mpolson64

Differential Revision: D41339501

fbshipit-source-id: ce2d5167397f47e2a348dc1f7c4c19964b848933

Pushed On 17 Nov 2022 at 06:10:48

Mpolson64

Fix Result wrapping in mixed MapMetric/Metric experiments

Summary: While rare, it was possible for an experiment to get into a weird state in which it would crash during metric fetching. The following sequence of events would have to happen:

  1. The experiment fetches MapData from some MapMetric and caches it
  2. A nonmap Metric's data is requested and data is available
  3. Metric.lookup_or_fetch_experiment_data_multi is called and encounters the cached MapData
  4. The Metric tries to wrap the cached MapData into a Data (since Data is the metric's default_data_type).

The solution is to always use MapMetric._wrap_trial_data when the experiment's default data constructor is MapData

Differential Revision: D41279558

fbshipit-source-id: f202591c49bd90d38333fa86e345601a38c6b0e2

Pushed On 17 Nov 2022 at 06:10:48

Mpolson64

Break off Trial.fetch_data_results from Trial.fetch_data (#1265)

Summary: Pull Request resolved: https://github.com/facebook/Ax/pull/1265

Break apart fetching into two distinct methods, the former getting the results as returned by the metrics and the latter (unsafely) unwrapping them into a Data object, raising errors.

Also added comments along the unwrap methods letting users know that if their experiment does not properly configure default_data_type then fetching may be lossy (this was always the case, even before the results refactor, but bears repeating now).

Differential Revision: D41348611

fbshipit-source-id: b08960324defffc435d4a91a4624ba72bae9ab4c

Pushed On 17 Nov 2022 at 06:10:48

Mpolson64

Fix Result wrapping in mixed MapMetric/Metric experiments

Summary: While rare, it was possible for an experiment to get into a weird state in which it would crash during metric fetching. The following sequence of events would have to happen:

  1. The experiment fetches MapData from some MapMetric and caches it
  2. A nonmap Metric's data is requested and data is available
  3. Metric.lookup_or_fetch_experiment_data_multi is called and encounters the cached MapData
  4. The Metric tries to wrap the cached MapData into a Data (since Data is the metric's default_data_type).

The solution is to always use MapMetric._wrap_trial_data when the experiment's default data constructor is MapData

Differential Revision: D41279558

fbshipit-source-id: aa9932294e5bf48ae69dfad8c1949556dfefedc6

Pushed On 16 Nov 2022 at 10:09:04

Mpolson64

Break off Trial.fetch_data_results from Trial.fetch_data (#1265)

Summary: Pull Request resolved: https://github.com/facebook/Ax/pull/1265

Break apart fetching into two distinct methods, the former getting the results as returned by the metrics and the latter (unsafely) unwrapping them into a Data object, raising errors.

Also added comments along the unwrap methods letting users know that if their experiment does not properly configure default_data_type then fetching may be lossy (this was always the case, even before the results refactor, but bears repeating now).

Differential Revision: D41348611

fbshipit-source-id: afb87a18e64539d88526f44a9f6f1351ead4cd7d

Pushed On 16 Nov 2022 at 10:09:04
Pull Request

Mpolson64

Break off Trial.fetch_data_results from Trial.fetch_data

Created On 16 Nov 2022 at 08:07:40
Create Branch
Mpolson64 In mpolson64/Ax Create Branch export-D41348611

Mpolson64

Adaptive Experimentation Platform

On 16 Nov 2022 at 08:07:39

Mpolson64

Fix Result wrapping in mixed MapMetric/Metric experiments (#1259)

Summary: Pull Request resolved: https://github.com/facebook/Ax/pull/1259

While rare, it was possible for an experiment to get into a weird state in which it would crash during metric fetching. The following sequence of events would have to happen:

  1. The experiment fetches MapData from some MapMetric and caches it
  2. A nonmap Metric's data is requested and data is available
  3. Metric.lookup_or_fetch_experiment_data_multi is called and encounters the cached MapData
  4. The Metric tries to wrap the cached MapData into a Data (since Data is the metric's default_data_type).

The solution is to always use MapMetric._wrap_trial_data when the experiment's default data constructor is MapData

Differential Revision: D41279558

fbshipit-source-id: 53e3d6731e86a30e9254596ea33ac9bd54586e76

Pushed On 15 Nov 2022 at 02:06:39

Mpolson64

remove check that objective_thresholds and objectives are the same length

Summary:

Created from CodeHub with https://fburl.com/edit-in-codehub

Reviewed By: saitcakmak

Differential Revision: D41037113

fbshipit-source-id: 4d3bc982d685ff33a1d0787ae94935897e28d3a0

Pushed On 15 Nov 2022 at 01:07:51

Mpolson64

Fix Result wrapping in mixed MapMetric/Metric experiments (#1259)

Summary: Pull Request resolved: https://github.com/facebook/Ax/pull/1259

While rare, it was possible for an experiment to get into a weird state in which it would crash during metric fetching. The following sequence of events would have to happen:

  1. The experiment fetches MapData from some MapMetric and caches it
  2. A nonmap Metric's data is requested and data is available
  3. Metric.lookup_or_fetch_experiment_data_multi is called and encounters the cached MapData
  4. The Metric tries to wrap the cached MapData into a Data (since Data is the metric's default_data_type).

The solution is to always use MapMetric._wrap_trial_data when the experiment's default data constructor is MapData

Differential Revision: D41279558

fbshipit-source-id: d7c6b9c886005e68e84737e3163e1a28478ef5cc

Pushed On 15 Nov 2022 at 01:07:51

Mpolson64

Botorch closures (#1191)

Summary: Pull Request resolved: https://github.com/facebook/Ax/pull/1191

X-link: https://github.com/pytorch/botorch/pull/1439

This diff acts as a follow-up to the recent model fitting refactor. The previous update focused on the high-level logic used to determine which fitting routines to use for which MLLs. This diff refactors the internal machinery used to evaluate forward-backward passes (producing losses and gradients, respectively) during optimization.

The solution we have opted for is to abstract away the evaluation process by relying on closures. In most cases, these closures are automatically constructed by composing simpler, multiply-dispatched base functions.
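
A generic illustration of the closure pattern (plain PyTorch; this is not BoTorch's actual closure utilities, whose names are not spelled out here):

import torch

def make_loss_closure(params, loss_fn):
    # A zero-argument callable that runs a forward pass and returns (loss, gradients),
    # i.e. the forward-backward "closure" shape described above.
    def closure():
        loss = loss_fn()
        grads = torch.autograd.grad(loss, params)
        return loss, grads
    return closure

x = torch.tensor([1.0, 2.0], requires_grad=True)
closure = make_loss_closure([x], lambda: (x ** 2).sum())
loss, grads = closure()
print(loss.item(), grads[0])  # 5.0, tensor([2., 4.])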

Reviewed By: Balandat

Differential Revision: D39101211

fbshipit-source-id: c2058a387fd74058073cfe73c9404d2df2f9b55a

Pushed On 15 Nov 2022 at 12:02:02

Mpolson64

Fix device error in _generate_sobol_points (#1256)

Summary: Pull Request resolved: https://github.com/facebook/Ax/pull/1256

CUDA tensors have to be moved to the CPU before calling .numpy().
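
For illustration (a generic PyTorch snippet, not the code from this diff):

import torch

t = torch.rand(3)
if torch.cuda.is_available():
    t = t.cuda()

# Calling .numpy() directly on a CUDA tensor raises a TypeError; move it to host memory first.
arr = t.cpu().numpy()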

Reviewed By: dme65

Differential Revision: D41233169

fbshipit-source-id: 75d298a5ab38ac8f3f74e0c7159ca8d8136e65e0

Pushed On 15 Nov 2022 at 12:02:02

Mpolson64

use all metrics for evaluations and data (#1209)

Summary: Pull Request resolved: https://github.com/facebook/Ax/pull/1209

Reviewed By: lena-kashtelyan

Differential Revision: D40390017

fbshipit-source-id: d31990c3af35c3d376ed0702d78b37e2cf0752a7

Pushed On 15 Nov 2022 at 12:02:02

Mpolson64

add optional arm_name arg to attach_trial (#1220)

Summary: Pull Request resolved: https://github.com/facebook/Ax/pull/1220

Allows the user to specify the name of the arm in the newly attached trial.
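
A hedged usage sketch (the experiment setup is illustrative, and the Service API call is assumed to forward the new argument; only arm_name itself comes from this change):

from ax.service.ax_client import AxClient
from ax.service.utils.instantiation import ObjectiveProperties

ax_client = AxClient()
ax_client.create_experiment(
    name="attach_demo",
    parameters=[{"name": "x", "type": "range", "bounds": [0.0, 1.0]}],
    objectives={"objective": ObjectiveProperties(minimize=True)},
)

# Attach a manually chosen parameterization and give its arm an explicit name.
parameters, trial_index = ax_client.attach_trial(parameters={"x": 0.25}, arm_name="baseline_arm")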

Reviewed By: lena-kashtelyan

Differential Revision: D40526695

fbshipit-source-id: ee26a93af71f764b38a2b9106f35f2c5bd254f5e

Pushed On 15 Nov 2022 at 12:02:02

Mpolson64

Add traceback to MetricFetchE repr (#1251)

Summary: Pull Request resolved: https://github.com/facebook/Ax/pull/1251

When you print a MetricFetchE, it will calculate and format the traceback of the associated Exception, if available. This will help considerably with debugging!
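
A small illustration (the MetricFetchE constructor arguments are assumed, not shown in this diff):

from ax.core.metric import MetricFetchE

try:
    raise RuntimeError("simulated fetch failure")
except RuntimeError as e:
    err = MetricFetchE(message="demo metric failed to fetch", exception=e)

# After this change, printing the error also shows the formatted traceback of the wrapped exception.
print(err)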

Reviewed By: bernardbeckerman

Differential Revision: D41193445

fbshipit-source-id: 64d6946ee4a84c982a8f1363739a02159d09d2cb

Pushed On 15 Nov 2022 at 12:02:02

Mpolson64

Count ABANDONED trials towards failure rate (#1255)

Summary: Pull Request resolved: https://github.com/facebook/Ax/pull/1255

The scheduler will kill the optimization if too many trials have failed, so the user can take a look at what's going on and make changes if needed. This is especially useful in the new setup we have where metric fetch failures can sometimes result in a trial being marked failed or abandoned.

Since both of these states are bad, we should count both towards the max failure rate so the user can be informed and intervene if necessary.

Reviewed By: bernardbeckerman

Differential Revision: D41227143

fbshipit-source-id: 77135b3e5464f73ebcccf2aaacc6277f8bc6db07

Pushed On 15 Nov 2022 at 12:02:02
Pull Request

Mpolson64

Fix Result wrapping in mixed MapMetric/Metric experiments

Created On 14 Nov 2022 at 11:47:16
Create Branch
Mpolson64 In mpolson64/Ax Create Branch export-D41279558

Mpolson64

Adaptive Experimentation Platform

On 14 Nov 2022 at 11:47:15
Issue Comment

Mpolson64

Metric Registry Question (Hash changes between environments)

Hi, I'm looking to save an experiment with a custom Metric to SQL. I've registered the metric and saved the experiment, and the metric has been saved in the DB with metric_type = 74810.

I'm creating my SQAConfig like so:

bundle = RegistryBundle(
    metric_clss={MyMetric: None},
    runner_clss={MyRunner: None},
)

config = SQAConfig(
    json_encoder_registry=bundle.encoder_registry,
    json_decoder_registry=bundle.decoder_registry,
    metric_registry=bundle.metric_registry,
    runner_registry=bundle.runner_registry,
) 

When I try to load my experiment I get the following error:

ax.exceptions.storage.SQADecodeError: Cannot decode SQAMetric because 74810 is an invalid type. ATTENTION: There have been some recent changes to Metric/Runner registration in Ax. Please see https://ax.dev/tutorials/gpei_hartmann_developer.html#9.-Save-to-JSON-or-SQL for the most up-to-date information on saving custom metrics. 

It seems that the hash has changed, because the config.reverse_metric_registry contains {59305: <class 'mymodule.MyMetric'>, 1: <class 'ax.core.metric.Metric'>}.

If I'm understanding correctly, to fix this I can either manually set the hash seed (which doesn't scale well across environments), or I can set the type manually when creating the registry. Is the latter approach OK? I'm scared there may be "gotchas" down the road if I set it manually.

i.e.:

bundle = RegistryBundle(
    metric_clss={MyMetric: 2, ...},
    runner_clss={MyRunner: 2,  ...},
)

config = SQAConfig(
    json_encoder_registry=bundle.encoder_registry,
    json_decoder_registry=bundle.decoder_registry,
    metric_registry=bundle.metric_registry,
    runner_registry=bundle.runner_registry,
) 

Thanks in advance

Forked On 14 Nov 2022 at 10:38:29

Mpolson64

Hi @AustinGomez. Go ahead and set that int manually, so you'll wind up with:

bundle = RegistryBundle(
    metric_clss={MyMetric: 74810},
    runner_clss={MyRunner: 12345},
) 

Manually setting is actually the preferred (and safer!) way to go about things, so no need to be worried about gotchas down the line. By setting manually you're protecting yourself from changes to the hash -- providing None and using the class's hash is just a shortcut some of our users requested when we implemented this feature.

Also, feel free to use bundle.sqa_config instead of manually constructing your SQAConfig from the bundle contents; they are equivalent.
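
That is, continuing from the bundle defined above (only the sqa_config property mentioned here is being illustrated; nothing else is assumed):

config = bundle.sqa_config  # equivalent to the SQAConfig construction above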

Commented On 14 Nov 2022 at 10:38:29
Issue Comment

Mpolson64

Register custom GP class to registry for saving to json

I have a simple custom GP class, similar to the one in the tutorial:

from botorch.models.gpytorch import GPyTorchModel
from gpytorch.distributions import MultivariateNormal
from gpytorch.kernels import MaternKernel, ScaleKernel
from gpytorch.likelihoods import GaussianLikelihood
from gpytorch.means import ConstantMean
from gpytorch.models import ExactGP

# Custom GP class; inherits from the general GPyTorchModel mix-in from BoTorch and the ExactGP model from GPyTorch
class SimpleCustomGP(ExactGP, GPyTorchModel):
    _num_outputs = 1  # to inform GPyTorchModel API

    def __init__(self, train_X, train_Y):
        # squeeze output dim before passing train_Y to ExactGP
        super().__init__(train_X, train_Y.squeeze(-1), GaussianLikelihood())
        self.mean_module = ConstantMean()
        # Kernel can be customised
        self.covar_module = ScaleKernel(
            base_kernel=MaternKernel(nu=2.5),
        )
        self.to(train_X)  # make sure we're on the right device/dtype

    # Returns a MultivariateNormal with prior mean and covariance evaluated at x
    def forward(self, x):
        mean_x = self.mean_module(x)
        covar_x = self.covar_module(x)
        return MultivariateNormal(mean_x, covar_x)

and would like to save my ax_client to JSON with ax_client.save_to_json_file("test.json"); however, I get the ValueError

Class "SimpleCustomGP" not in Type[Model] registry, please add it. BoTorch object registries are located in "ax/storage/botorch_modular_registry.py". 

What is the proper procedure for adding my custom class to the registry? Thank you for your help

Forked On 14 Nov 2022 at 10:32:11

Mpolson64

Hi @thschaer -- what you have is actually exactly right. Storage of custom torch components is a difficult task and improving this flow has been on the team's wishlist for a while now. For now what you have is the best way to do things, and we hope to continue making Ax easier to use for both simple and complex use cases in the future :)

Commented On 14 Nov 2022 at 10:32:11
Create Branch
Mpolson64 In mpolson64/Kats Create Branch export-D41233743

Mpolson64

Kats, a kit to analyze time series data, a lightweight, easy-to-use, generalizable, and extendable framework to perform time series analysis, from understanding the key statistics and characteristics, detecting change points and anomalies, to forecasting future trends.

On 11 Nov 2022 at 10:10:24

Mpolson64

Bump required Ax version to v0.2.9

Created On 11 Nov 2022 at 10:10:24
Create Branch
Mpolson64 In mpolson64/Ax Create Branch export-D41227143

Mpolson64

Adaptive Experimentation Platform

On 11 Nov 2022 at 07:10:15