
Merge pull request #78 from thomaswmorris/refactor-constraints
Refactor constraints
thomaswmorris authored Nov 27, 2024
2 parents 34f613b + 72dc293 commit 1e747ea
Showing 19 changed files with 318 additions and 273 deletions.
10 changes: 5 additions & 5 deletions .pre-commit-config.yaml
@@ -2,28 +2,28 @@ default_language_version:
   python: python3
 repos:
 - repo: https://github.com/pre-commit/pre-commit-hooks
-  rev: v4.4.0
+  rev: v5.0.0
   hooks:
   - id: check-yaml
   - id: end-of-file-fixer
   - id: trailing-whitespace
 - repo: https://github.com/ambv/black
-  rev: 23.1.0
+  rev: 24.10.0
   hooks:
   - id: black
     language_version: python3
   - id: black-jupyter
     language_version: python3
 - repo: https://github.com/pycqa/flake8
-  rev: 6.0.0
+  rev: 7.1.1
   hooks:
   - id: flake8
 - repo: https://github.com/pycqa/isort
-  rev: 5.12.0
+  rev: 5.13.2
   hooks:
   - id: isort
     args: ["--profile", "black"]
 - repo: https://github.com/kynan/nbstripout
-  rev: 0.6.1
+  rev: 0.8.1
   hooks:
   - id: nbstripout
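The hook revision bumps in this file are the kind of change that `pre-commit autoupdate` generates mechanically. As a minimal sketch (not part of this repository), here is how the updated repo/rev pairs could be extracted from the config text with only the standard library, so a reviewer can list pinned hook versions at a glance; the `CONFIG` string is an abridged stand-in for the real file:

```python
# Sketch: extract (repo, rev) pairs from .pre-commit-config.yaml text
# using only the standard library (no PyYAML dependency).
# CONFIG is an abridged stand-in mirroring the post-update state of this commit.
CONFIG = """\
repos:
- repo: https://github.com/pre-commit/pre-commit-hooks
  rev: v5.0.0
- repo: https://github.com/pycqa/flake8
  rev: 7.1.1
- repo: https://github.com/kynan/nbstripout
  rev: 0.8.1
"""

def hook_revs(text):
    """Pair each '- repo:' URL with the 'rev:' line that follows it."""
    pairs, repo = [], None
    for line in text.splitlines():
        stripped = line.strip()
        if stripped.startswith("- repo:"):
            repo = stripped.split(":", 1)[1].strip()
        elif stripped.startswith("rev:") and repo is not None:
            pairs.append((repo, stripped.split(":", 1)[1].strip()))
            repo = None
    return pairs

for repo, rev in hook_revs(CONFIG):
    print(repo.rsplit("/", 1)[-1], rev)
```

Running `pre-commit autoupdate` followed by `pre-commit run --all-files` reproduces and validates bumps like these locally.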
20 changes: 10 additions & 10 deletions docs/source/tutorials/hyperparameters.ipynb
@@ -3,7 +3,7 @@
 {
   "attachments": {},
   "cell_type": "markdown",
-  "id": "e7b5e13a-c059-441d-8d4f-fff080d52054",
+  "id": "0",
   "metadata": {},
   "source": [
    "# Hyperparameters\n",
@@ -14,7 +14,7 @@
 {
   "cell_type": "code",
   "execution_count": null,
-  "id": "22438de8",
+  "id": "1",
   "metadata": {},
   "outputs": [],
   "source": [
@@ -37,7 +37,7 @@
 {
   "attachments": {},
   "cell_type": "markdown",
-  "id": "7a88c7bd",
+  "id": "2",
   "metadata": {},
   "source": [
    "The optimization goes faster if our model understands how the function changes as we change the inputs in different ways. The way it picks up on this is by starting from a general model that could describe a lot of functions, and making it specific to this one by choosing the right hyperparameters. Our Bayesian agent is very good at this, and only needs a few samples to figure out what the function looks like:"
@@ -46,7 +46,7 @@
 {
   "cell_type": "code",
   "execution_count": null,
-  "id": "7e9c949e",
+  "id": "3",
   "metadata": {},
   "outputs": [],
   "source": [
@@ -60,7 +60,7 @@
 {
   "cell_type": "code",
   "execution_count": null,
-  "id": "071a829f-a390-40dc-9d5b-ae75702e119e",
+  "id": "4",
   "metadata": {
    "tags": []
   },
@@ -97,7 +97,7 @@
 {
   "attachments": {},
   "cell_type": "markdown",
-  "id": "9ab3be01",
+  "id": "5",
   "metadata": {},
   "source": [
    "In addition to modeling the fitness of the task, the agent models the probability that an input will be feasible:"
@@ -106,7 +106,7 @@
 {
   "cell_type": "code",
   "execution_count": null,
-  "id": "bc53bf67",
+  "id": "6",
   "metadata": {
    "tags": []
   },
@@ -118,7 +118,7 @@
 {
   "cell_type": "code",
   "execution_count": null,
-  "id": "ebc65169",
+  "id": "7",
   "metadata": {},
   "outputs": [],
   "source": [
@@ -129,7 +129,7 @@
 {
   "cell_type": "code",
   "execution_count": null,
-  "id": "b70eaf9b",
+  "id": "8",
   "metadata": {},
   "outputs": [],
   "source": [
@@ -153,7 +153,7 @@
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
-  "version": "3.11.9"
+  "version": "3.9.20"
  },
  "vscode": {
   "interpreter": {
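The notebook side of this diff replaces random UUID-style cell ids with sequential integers ("e7b5e13a-…" becomes "0", "22438de8" becomes "1", and so on). As a minimal sketch, assuming nothing about the project's actual tooling (the commit may have produced these via its hooks), sequential id normalization on a notebook's JSON looks like this:

```python
import json

def renumber_cell_ids(nb: dict) -> dict:
    """Replace each cell's 'id' with its sequential index, as strings."""
    for i, cell in enumerate(nb.get("cells", [])):
        cell["id"] = str(i)
    return nb

# A tiny stand-in notebook with UUID-style ids like those in this diff.
nb = {
    "cells": [
        {"cell_type": "markdown", "id": "e7b5e13a-c059-441d-8d4f-fff080d52054"},
        {"cell_type": "code", "id": "22438de8"},
    ],
    "nbformat": 4,
    "nbformat_minor": 5,
}
print(json.dumps([c["id"] for c in renumber_cell_ids(nb)["cells"]]))  # prints ["0", "1"]
```

Sequential ids make notebook diffs stable across re-executions, which is why every changed hunk above is a one-line id swap.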
54 changes: 27 additions & 27 deletions docs/source/tutorials/introduction.ipynb
@@ -3,7 +3,7 @@
 {
   "attachments": {},
   "cell_type": "markdown",
-  "id": "e7b5e13a-c059-441d-8d4f-fff080d52054",
+  "id": "0",
   "metadata": {},
   "source": [
    "# Introduction (Himmelblau's function)\n",
@@ -13,7 +13,7 @@
 {
   "attachments": {},
   "cell_type": "markdown",
-  "id": "c18ef717",
+  "id": "1",
   "metadata": {},
   "source": [
    "Let's use ``blop`` to minimize Himmelblau's function, which has four global minima:"
@@ -22,7 +22,7 @@
 {
   "cell_type": "code",
   "execution_count": null,
-  "id": "cf27fc9e-d11c-40f4-a200-98e7814f506b",
+  "id": "2",
   "metadata": {},
   "outputs": [],
   "source": [
@@ -34,7 +34,7 @@
 {
   "cell_type": "code",
   "execution_count": null,
-  "id": "22438de8",
+  "id": "3",
   "metadata": {},
   "outputs": [],
   "source": [
@@ -56,7 +56,7 @@
 },
 {
   "cell_type": "markdown",
-  "id": "2500c410",
+  "id": "4",
   "metadata": {},
   "source": [
    "There are several things that our agent will need. The first ingredient is some degrees of freedom (these are always `ophyd` devices) which the agent will move around to different inputs within each DOF's bounds (the second ingredient). We define these here:"
@@ -65,7 +65,7 @@
 {
   "cell_type": "code",
   "execution_count": null,
-  "id": "5d6df7a4",
+  "id": "5",
   "metadata": {},
   "outputs": [],
   "source": [
@@ -79,7 +79,7 @@
 },
 {
   "cell_type": "markdown",
-  "id": "54b6f23e",
+  "id": "6",
   "metadata": {},
   "source": [
    "We also need to give the agent something to do. We want our agent to look in the feedback for a variable called 'himmelblau', and try to minimize it."
@@ -88,7 +88,7 @@
 {
   "cell_type": "code",
   "execution_count": null,
-  "id": "c8556bc9",
+  "id": "7",
   "metadata": {},
   "outputs": [],
   "source": [
@@ -100,7 +100,7 @@
 {
   "attachments": {},
   "cell_type": "markdown",
-  "id": "7a88c7bd",
+  "id": "8",
   "metadata": {},
   "source": [
    "In our digestion function, we define our objective as a deterministic function of the inputs:"
@@ -109,7 +109,7 @@
 {
   "cell_type": "code",
   "execution_count": null,
-  "id": "e6bfcf73",
+  "id": "9",
   "metadata": {},
   "outputs": [],
   "source": [
@@ -123,7 +123,7 @@
 {
   "attachments": {},
   "cell_type": "markdown",
-  "id": "0d3d91c3",
+  "id": "10",
   "metadata": {},
   "source": [
    "We then combine these ingredients into an agent, giving it an instance of ``databroker`` so that it can see the output of the plans it runs."
@@ -132,7 +132,7 @@
 {
   "cell_type": "code",
   "execution_count": null,
-  "id": "071a829f-a390-40dc-9d5b-ae75702e119e",
+  "id": "11",
   "metadata": {
    "tags": []
   },
@@ -150,7 +150,7 @@
 },
 {
   "cell_type": "markdown",
-  "id": "27685849",
+  "id": "12",
   "metadata": {},
   "source": [
    "Without any data, we can't make any inferences about what the function looks like, and so we can't use any non-trivial acquisition functions. Let's start by quasi-randomly sampling the parameter space, and plotting our model of the function:"
@@ -159,7 +159,7 @@
 {
   "cell_type": "code",
   "execution_count": null,
-  "id": "996da937",
+  "id": "13",
   "metadata": {},
   "outputs": [],
   "source": [
@@ -169,7 +169,7 @@
 },
 {
   "cell_type": "markdown",
-  "id": "dc264346-10fb-4c88-9925-4bfcf0dd3b07",
+  "id": "14",
   "metadata": {},
   "source": [
    "To decide which points to sample, the agent needs an acquisition function. The available acquisition function are here:"
@@ -178,7 +178,7 @@
 {
   "cell_type": "code",
   "execution_count": null,
-  "id": "fb06739b",
+  "id": "15",
   "metadata": {},
   "outputs": [],
   "source": [
@@ -188,7 +188,7 @@
 {
   "attachments": {},
   "cell_type": "markdown",
-  "id": "ab608930",
+  "id": "16",
   "metadata": {},
   "source": [
    "Now we can start to learn intelligently. Using the shorthand acquisition functions shown above, we can see the output of a few different ones:"
@@ -197,7 +197,7 @@
 {
   "cell_type": "code",
   "execution_count": null,
-  "id": "43b55f4f",
+  "id": "17",
   "metadata": {},
   "outputs": [],
   "source": [
@@ -207,7 +207,7 @@
 {
   "attachments": {},
   "cell_type": "markdown",
-  "id": "18210f81-0e23-42b7-8589-77dc260e3131",
+  "id": "18",
   "metadata": {},
   "source": [
    "To decide where to go, the agent will find the inputs that maximize a given acquisition function:"
@@ -216,7 +216,7 @@
 {
   "cell_type": "code",
   "execution_count": null,
-  "id": "b902172e-e89c-4346-89f3-bf9571cba6b3",
+  "id": "19",
   "metadata": {
    "tags": []
   },
@@ -228,7 +228,7 @@
 {
   "attachments": {},
   "cell_type": "markdown",
-  "id": "9a888385-4e09-4fea-9282-cd6a6fe2c3df",
+  "id": "20",
   "metadata": {},
   "source": [
    "We can also ask the agent for multiple points to sample and it will jointly maximize the acquisition function over all sets of inputs, and find the most efficient route between them:"
@@ -237,7 +237,7 @@
 {
   "cell_type": "code",
   "execution_count": null,
-  "id": "28c5c0df",
+  "id": "21",
   "metadata": {
    "tags": []
   },
@@ -251,7 +251,7 @@
 },
 {
   "cell_type": "markdown",
-  "id": "23f3f7ef-c024-4ac1-9144-d0b6fb8a3944",
+  "id": "22",
   "metadata": {},
   "source": [
    "All of this is automated inside the ``learn`` method, which will find a point (or points) to sample, sample them, and retrain the model and its hyperparameters with the new data. To do 4 learning iterations of 8 points each, we can run"
@@ -260,7 +260,7 @@
 {
   "cell_type": "code",
   "execution_count": null,
-  "id": "ff1c5f1c",
+  "id": "23",
   "metadata": {},
   "outputs": [],
   "source": [
@@ -269,7 +269,7 @@
 },
 {
   "cell_type": "markdown",
-  "id": "b52f3352-3b67-431c-b5af-057e02def5ba",
+  "id": "24",
   "metadata": {},
   "source": [
    "Our agent has found all the global minima of Himmelblau's function using Bayesian optimization, and we can ask it for the best point: "
@@ -278,7 +278,7 @@
 {
   "cell_type": "code",
   "execution_count": null,
-  "id": "0d5cc0c8-33cf-4fb1-b91c-81828e249f6a",
+  "id": "25",
   "metadata": {},
   "outputs": [],
   "source": [
@@ -303,7 +303,7 @@
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
-  "version": "3.10.0"
+  "version": "3.9.20"
  },
  "vscode": {
   "interpreter": {
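The tutorial in this diff minimizes Himmelblau's function, a standard optimization benchmark with four global minima where the function equals zero. As a self-contained sketch (independent of the ``blop`` API), the objective and a spot check of a known minimum:

```python
# Himmelblau's function, the objective minimized in the tutorial notebook above.
# It has four global minima, all with f(x, y) == 0; one is exactly at (3, 2).
def himmelblau(x: float, y: float) -> float:
    return (x**2 + y - 11) ** 2 + (x + y**2 - 7) ** 2

print(himmelblau(3.0, 2.0))  # prints 0.0
# The other three minima are near (-2.805, 3.131), (-3.779, -3.283), (3.584, -1.848).
```

Because the four minima have identical value, an optimizer that reports all of them (as the notebook's agent does) is demonstrating exploration, not just exploitation.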
