Merged
38 changes: 35 additions & 3 deletions docs/Quickstart.rst
@@ -31,8 +31,9 @@ Quick Start
Examples
--------

Below are three example scripts demonstrating LLaMEA in action for black-box
optimization with a BBOB (24 noiseless) function suite, and one Automated Machine Learning use-case.
Below are four example scripts demonstrating LLaMEA in action for black-box
optimization with a BBOB (24 noiseless) function suite, a multi-objective
optimization workflow, and one Automated Machine Learning use-case.
One of the black-box optimization scripts (`example.py`) runs basic LLaMEA, while the other (`example_HPO.py`) incorporates
a **hyper-parameter optimization** pipeline—known as **LLaMEA-HPO**—that employs
SMAC to tune the algorithm’s parameters in the loop.
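The in-the-loop tuning pattern can be sketched in miniature. Below, a plain grid search stands in for SMAC, and a toy one-parameter routine stands in for an LLM-generated algorithm; all names here are hypothetical and not part of the LLaMEA API.

```python
def candidate(x: float, step: float) -> float:
    """Toy 'algorithm': one derivative-free step toward the minimum of x**2."""
    return min(x - step, x + step, key=lambda v: v * v)

def evaluate_with_hpo(start: float, steps=(0.1, 0.5, 1.0)) -> float:
    """Score a candidate only after tuning its hyper-parameter in the loop."""
    best = float("inf")
    for step in steps:          # inner HPO loop (SMAC plays this role in LLaMEA-HPO)
        x = start
        for _ in range(20):     # run the candidate with this configuration
            x = candidate(x, step)
        best = min(best, x * x)
    return best                 # fitness of the best-tuned configuration
```

The point of the pattern is that each generated algorithm is judged at its tuned best, so the outer evolutionary loop compares algorithms rather than accidental hyper-parameter choices.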
@@ -124,4 +125,35 @@ In this example, a basic classification task on the breast-cancer dataset from s

.. note::
Adjust the model name (`ai_model`) or API key as needed in the script.
You can easily change the dataset, task and evaluation function to fit your needs.
You can easily change the dataset, task and evaluation function to fit your needs.


Running ``multi_objective.py``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

``multi_objective.py`` demonstrates Pareto-based optimization with
LLaMEA on a synthetic Travelling Salesman Problem variant that optimizes
two conflicting objectives:

- **Distance**: total route length.
- **Fuel**: route cost with load-dependent fuel consumption.
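The two objectives can be made concrete with a small, self-contained sketch. The coordinates, demands, and fuel formula below are illustrative stand-ins, not the exact formulas used by the example script.

```python
import math

# Hypothetical instance: a depot at (50, 50) and customers as ((x, y), demand).
depot = (50.0, 50.0)
customers = [((60.0, 50.0), 2.0), ((60.0, 60.0), 1.0)]

def objectives(route):
    """Return (distance, fuel) for a route given as customer indices."""
    stops = [depot] + [customers[i][0] for i in route] + [depot]
    load = sum(customers[i][1] for i in route)   # load when leaving the depot
    distance = fuel = 0.0
    for k in range(len(stops) - 1):
        (x1, y1), (x2, y2) = stops[k], stops[k + 1]
        leg = math.hypot(x2 - x1, y2 - y1)
        distance += leg
        fuel += leg * (1.0 + 0.1 * load)         # heavier truck burns more fuel
        if k < len(route):
            load -= customers[route[k]][1]       # demand delivered at this stop
    return distance, fuel
```

Because fuel depends on how much load is still on board, the route that minimizes distance need not minimize fuel, which is what makes the two objectives conflict.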

The example highlights:

- Returning a :class:`~llamea.multi_objective_fitness.Fitness` object from
the evaluator.
- Enabling ``multi_objective=True`` in :class:`~llamea.llamea.LLaMEA`.
- Passing ``multi_objective_keys=["Distance", "Fuel"]`` so objective values
are tracked consistently.
- Receiving a :class:`~llamea.pareto_archive.ParetoArchive` and extracting the
final non-dominated set.
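The archive's job, in miniature, is to keep only non-dominated objective vectors. The filter below is an illustrative stand-in for what extracting the final set from a :class:`~llamea.pareto_archive.ParetoArchive` yields, not the library's implementation; both objectives are minimized.

```python
def non_dominated(points):
    """Return the points not dominated by any other point (minimization)."""
    def dominates(a, b):
        # a dominates b if it is no worse in every objective and differs somewhere
        return all(x <= y for x, y in zip(a, b)) and a != b
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

For instance, among ``(10, 5)``, ``(8, 7)``, ``(9, 9)``, and ``(12, 4)``, only ``(9, 9)`` is filtered out, since ``(8, 7)`` beats it on both objectives.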

How to run:

.. code-block:: bash

python multi_objective.py
P2: Fix multi_objective run command to include examples path

The new quickstart instruction ``python multi_objective.py`` is not runnable from the repository root, because the script lives under ``examples/multi_objective.py``. Users following this section as written will get a file-not-found error and will be blocked from running the documented workflow unless they infer an unstated ``cd examples`` step.


.. note::
The script defaults to an Ollama model (``gemma3:12b``). Update the LLM
backend and credentials to match your local setup.
5 changes: 5 additions & 0 deletions docs/llamea.rst
@@ -15,6 +15,9 @@ Recent features include:
instead of entire source files from the LLM. This is more token efficient for large code bases.
* **Population evaluation** – with ``evaluate_population=True`` the evaluation
function ``f`` operates on lists of solutions, allowing batch evaluations.
* **Multi-objective mode** – set ``multi_objective=True`` and provide
``multi_objective_keys=[...]`` to optimize multiple objectives and maintain a
Pareto archive instead of a single best solution.
* **Warm start** – with every iteration, **LLaMEA** archives its latest run in
``<experiment_log_directory>/llamea_config.pkl``. The framework provides
``warm_start`` class methods that allow you to resume from a previously
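The warm-start pattern has a familiar generic shape: persist run state each iteration, then reload it on restart. The helpers below are an illustrative sketch of that shape only; LLaMEA ships dedicated ``warm_start`` class methods, and these function names are hypothetical.

```python
import os
import pickle
import tempfile

def save_state(state: dict, path: str) -> None:
    """Archive the latest run state (done once per iteration)."""
    with open(path, "wb") as fh:
        pickle.dump(state, fh)

def resume(path: str, default: dict) -> dict:
    """Resume from an archived state if one exists, else start fresh."""
    try:
        with open(path, "rb") as fh:
            return pickle.load(fh)
    except FileNotFoundError:
        return default
```

Archiving every iteration means an interrupted run loses at most one iteration of work.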
@@ -52,6 +55,8 @@ The most important keyword arguments of :class:`LLaMEA` are summarised below.
- Prompt engineering controls.
* - ``mutation_prompts`` / ``adaptive_mutation`` / ``adaptive_prompt``
- Mutation and prompt adaptation settings.
* - ``multi_objective`` / ``multi_objective_keys``
- Enable Pareto-based optimization and define objective names.
* - ``budget`` / ``eval_timeout`` / ``max_workers`` / ``parallel_backend``
- Runtime and parallelisation controls.
* - ``log`` / ``experiment_name``
24 changes: 21 additions & 3 deletions examples/multi_objective.py
@@ -1,3 +1,12 @@
"""Multi-objective LLaMEA example on a synthetic TSP variant.

This script shows how to:
1. Evaluate generated code against two objectives (Distance and Fuel).
2. Return objective values using ``Fitness``.
3. Run LLaMEA with ``multi_objective=True`` and objective keys.
4. Read final non-dominated solutions from ``ParetoArchive``.
"""

import os
import random
from typing import Optional
@@ -26,6 +35,7 @@ def __repr__(self):


def generate_tsp_test(seed: Optional[int] = None, size: int = 10):
"""Generate a depot and customer set for the synthetic TSP task."""
if seed is not None:
random.seed(seed)
depot = Location(0, 50, 50, 0)
@@ -46,6 +56,12 @@ def generate_tsp_test(seed: Optional[int] = None, size: int = 10):
referable_dict[customer.id] = customer

def evaluate(solution: Solution, explogger: Optional[ExperimentLogger] = None):
"""Evaluate generated solver code on a two-objective TSP benchmark.

The generated class must return a permutation of customer ids. The evaluator
validates the route, computes total travel distance and load-dependent fuel
usage, then stores a ``Fitness`` object with both objectives.
"""
code = solution.code

global_ns, issues = prepare_namespace(
@@ -180,7 +196,8 @@ def __call__():
return customer_ids
"""

llamea_inst = LLaMEA(f=evaluate,
# Multi-objective mode returns a Pareto archive instead of a single winner.
llamea_inst = LLaMEA(f=evaluate,
llm=llm,
multi_objective=True,
max_workers=3,
@@ -192,10 +209,11 @@ def __call__():
example_prompt=example_prompt,
experiment_name="MOO-TSP",
minimization=True,
budget=27
budget=27
)

solutions = llamea_inst.run()
# Keep only the final non-dominated set for reporting/inspection.
if isinstance(solutions, ParetoArchive):
solutions = solutions.get_best()

@@ -206,4 +224,4 @@ def __call__():
print(solutions.description)
print(solutions.code)
print(solutions.fitness)
print("------------------------------------------------------------------------------------------------------------------------")
print("------------------------------------------------------------------------------------------------------------------------")
8 changes: 8 additions & 0 deletions llamea/llamea.py
@@ -110,6 +110,14 @@ def __init__(
task_prompt (str): A prompt describing the task for the language model to generate optimization algorithms.
example_prompt (str): An example prompt to guide the language model in generating code (or None for default).
output_format_prompt (str): A prompt that specifies the output format of the language model's response.
multi_objective (bool): Enable multi-objective optimization mode.
When set to ``True``, the evaluation function should assign a
:class:`~llamea.multi_objective_fitness.Fitness` object via
:meth:`~llamea.solution.Solution.set_scores`.
multi_objective_keys (list[str]): Ordered objective names used by
the multi-objective pipeline (e.g. ``["Distance", "Fuel"]``).
Each key must be present in every returned
:class:`~llamea.multi_objective_fitness.Fitness` object.
experiment_name (str): The name of the experiment for logging purposes.
elitism (bool): Flag to decide if elitism should be used in the evolutionary process.
HPO (bool): Flag to decide if hyper-parameter optimization is part of the evaluation function.