168 changes: 143 additions & 25 deletions 1_foundations/1_lab1.ipynb
@@ -85,7 +85,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
@@ -96,9 +96,20 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 2,
"metadata": {},
"outputs": [],
"outputs": [
{
"data": {
"text/plain": [
"True"
]
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Next it's time to load the API keys into environment variables\n",
"# If this returns false, see the next cell!\n",
@@ -141,9 +152,17 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 3,
"metadata": {},
"outputs": [],
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"OpenAI API Key exists and begins sk-proj-\n"
]
}
],
"source": [
"# Check the key - if you're not using OpenAI, check whichever key you're using! Ollama doesn't need a key.\n",
"\n",
@@ -159,7 +178,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
@@ -172,7 +191,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
@@ -189,6 +208,13 @@
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [],
"source": [
"# Create a list of messages in the familiar OpenAI format\n",
"\n",
@@ -197,9 +223,17 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 7,
"metadata": {},
"outputs": [],
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"2 + 2 equals 4.\n"
]
}
],
"source": [
"# And now call it! Any problems, head to the troubleshooting guide\n",
"# This uses GPT 4.1 nano, the incredibly cheap model\n",
@@ -216,7 +250,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 8,
"metadata": {},
"outputs": [],
"source": [
@@ -228,9 +262,17 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 9,
"metadata": {},
"outputs": [],
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"If five machines take five minutes to make five widgets, how long would it take 100 machines to make 100 widgets?\n"
]
}
],
"source": [
"# ask it - this uses GPT 4.1 mini, still cheap but more powerful than nano\n",
"\n",
@@ -246,7 +288,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 10,
"metadata": {},
"outputs": [],
"source": [
@@ -256,9 +298,36 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 11,
"metadata": {},
"outputs": [],
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Let's analyze the problem step-by-step:\n",
"\n",
"**Given:**\n",
"- 5 machines take 5 minutes to make 5 widgets.\n",
"\n",
"**Step 1: Determine the rate of one machine.**\n",
"\n",
"If 5 machines make 5 widgets in 5 minutes, then:\n",
"\n",
"- Total widgets made per minute by 5 machines = 5 widgets / 5 minutes = 1 widget per minute.\n",
"- Therefore, 1 machine makes 1/5 widget per minute.\n",
"\n",
"**Step 2: Calculate the time for 100 machines to make 100 widgets.**\n",
"\n",
"- 100 machines will make 100 × (1/5) = 20 widgets per minute.\n",
"- To make 100 widgets, time required = 100 widgets / 20 widgets per minute = 5 minutes.\n",
"\n",
"**Answer:**\n",
"\n",
"It would take **5 minutes** for 100 machines to make 100 widgets.\n"
]
}
],
"source": [
"# Ask it again\n",
"\n",
@@ -273,9 +342,41 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 12,
"metadata": {},
"outputs": [],
"outputs": [
{
"data": {
"text/markdown": [
"Let's analyze the problem step-by-step:\n",
"\n",
"**Given:**\n",
"- 5 machines take 5 minutes to make 5 widgets.\n",
"\n",
"**Step 1: Determine the rate of one machine.**\n",
"\n",
"If 5 machines make 5 widgets in 5 minutes, then:\n",
"\n",
"- Total widgets made per minute by 5 machines = 5 widgets / 5 minutes = 1 widget per minute.\n",
"- Therefore, 1 machine makes 1/5 widget per minute.\n",
"\n",
"**Step 2: Calculate the time for 100 machines to make 100 widgets.**\n",
"\n",
"- 100 machines will make 100 × (1/5) = 20 widgets per minute.\n",
"- To make 100 widgets, time required = 100 widgets / 20 widgets per minute = 5 minutes.\n",
"\n",
"**Answer:**\n",
"\n",
"It would take **5 minutes** for 100 machines to make 100 widgets."
],
"text/plain": [
"<IPython.core.display.Markdown object>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"from IPython.display import Markdown, display\n",
"\n",
@@ -318,29 +419,46 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 23,
"metadata": {},
"outputs": [],
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"ChatCompletion(id='chatcmpl-Cm2P1aNWV2lKZEwVTojSUKPipojmE', choices=[Choice(finish_reason='stop', index=0, logprobs=None, message=ChatCompletionMessage(content='Proposed Agentic AI Solution for Enhancing Trust and Control in Autonomous Process Management\\n\\nOverview:\\nDevelop a comprehensive, multi-layered Agentic AI framework designed to prioritize transparency, safety, seamless integration, security, and user acceptance. This solution leverages advanced technologies to provide enterprises with confidence in autonomous decision-making, ensuring operational reliability while maintaining human oversight.\\n\\nKey Components:\\n\\n1. Transparent Explainability Module\\n - Dynamic Explanation Interface:\\n - Provides real-time, human-understandable explanations for each autonomous decision or action.\\n - Uses visualizations, natural language summaries, and context-specific insights.\\n - Decision Rationale Tracking:\\n - Logs the reasoning process, including data inputs, model considerations, and decision pathways.\\n - Enables auditors and stakeholders to review legacy decisions for compliance and accountability.\\n\\n2. Robust Safety and Fail-Safe Mechanisms\\n - Confidence Scoring:\\n - Assigns confidence levels to decisions, flagging low-confidence actions for human review.\\n - Automated Safeguards:\\n - Incorporates rule-based checks and constraints aligned with enterprise policies and regulatory requirements.\\n - Human-in-the-Loop Oversight:\\n - Provides interfaces for human supervisors to approve, modify, or override autonomous actions before execution.\\n\\n3. Secure Integration and Data Management\\n - Secure API Gateways:\\n - Ensures secure, controlled access to existing legacy systems and workflows.\\n - Data Privacy Layers:\\n - Implements encryption, anonymization, and access controls to protect sensitive enterprise data.\\n - Continuous Monitoring:\\n - Tracks data flows and system interactions for anomalies or security breaches.\\n\\n4. Audit Trails and Compliance\\n - Comprehensive Logging:\\n - Maintains immutable records of decisions, actions, justifications, and overrides.\\n - Compliance Dashboard:\\n - Offers real-time compliance status, audit reports, and alerts for potential violations.\\n\\n5. User-Centric Interface and Change Management\\n - Intuitive Oversight Dashboards:\\n - Visualizes ongoing autonomous operations, decision rationales, and system status.\\n - Education and Training Modules:\\n - Facilitates workforce understanding of AI processes to foster acceptance.\\n - Feedback Loop:\\n - Gathers user feedback to continuously improve explainability, safety protocols, and usability.\\n\\n6. 
Adaptive Learning and Continuous Improvement\\n - Performance Monitoring:\\n - Tracks accuracy, error rates, and decision quality over time.\\n - Model Retraining and Adjustment:\\n - Uses feedback and new data to refine autonomous decision models, enhancing reliability.\\n\\nDeployment Strategy:\\n- Pilot Phase:\\n Test the framework in controlled environments, gather feedback, and calibrate explainability and safety features.\\n- Incremental Rollout:\\n Gradually integrate with existing workflows, ensuring minimal disruption and maximum acceptance.\\n- Continuous Monitoring and Iteration:\\n Regularly review system performance, user feedback, and compliance to iteratively enhance trustworthiness.\\n\\nExpected Outcomes:\\n- Increased transparency and confidence in autonomous decisions.\\n- Reduced risk of errors and regulatory violations through fail-safe mechanisms.\\n- Seamless integration with legacy systems via secure APIs.\\n- Empowered workforce with better oversight tools and understanding of AI actions.\\n- Enhanced enterprise trust enabling broader adoption of agentic AI solutions.\\n\\nBy embedding these features into an agentic AI platform, enterprises can effectively address trust and control challenges, unlocking the full potential of autonomous process management with confidence and oversight.', refusal=None, role='assistant', annotations=[], audio=None, function_call=None, tool_calls=None))], created=1765565015, model='gpt-4.1-nano-2025-04-14', object='chat.completion', service_tier='default', system_fingerprint='fp_7f8eb7d1f9', usage=CompletionUsage(completion_tokens=669, prompt_tokens=264, total_tokens=933, completion_tokens_details=CompletionTokensDetails(accepted_prediction_tokens=0, audio_tokens=0, reasoning_tokens=0, rejected_prediction_tokens=0), prompt_tokens_details=PromptTokensDetails(audio_tokens=0, cached_tokens=0)))\n"
]
}
],
"source": [
"# First create the messages:\n",
"\n",
"messages = [{\"role\": \"user\", \"content\": \"Something here\"}]\n",
"messages1 = [{\"role\": \"user\", \"content\": \"Sugget a bussinees area for Agentic AI\"}]\n",
"\n",
"# Then make the first call:\n",
"\n",
"response =\n",
"response1= openai.chat.completions.create(model=\"gpt-4.1-mini\", messages=messages1)\n",
"\n",
"# Then read the business idea:\n",
"\n",
"business_idea = response.\n",
"business_area = response1.choices[0].message.content\n",
"messages2 = [{\"role\": \"user\", \"content\": f\"What is a major pain-point in {business_area}?\"}]\n",
"response2 = openai.chat.completions.create(model=\"gpt-4.1-nano\", messages=messages2)\n",
"\n",
"# And repeat! In the next message, include the business idea within the message"
"# Step 3: Ask for an Agentic AI solution\n",
"pain_point = response2.choices[0].message.content\n",
"messages3 = [{\"role\": \"user\", \"content\": f\"Propose an Agentic AI solution for: {pain_point}\"}]\n",
"response3 = openai.chat.completions.create(model=\"gpt-4.1-nano\", messages=messages3)\n",
"\n",
"print(response3)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": []
"source": [
"\n"
]
}
],
"metadata": {
@@ -359,7 +477,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.9"
"version": "3.12.12"
}
},
"nbformat": 4,
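
The completed exercise cell chains three completions: pick a business area, identify a pain-point, then propose an Agentic AI solution. The same pattern as a self-contained script, assuming the openai v1 SDK, an OPENAI_API_KEY in the environment, and a hypothetical ask() helper that is not part of the notebook:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def ask(prompt: str, model: str = "gpt-4.1-nano") -> str:
        # Send one user message and return only the assistant's text
        response = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content

    # Step 1: business area (gpt-4.1-mini, as in the cell above)
    business_area = ask("Suggest a business area for Agentic AI", model="gpt-4.1-mini")
    # Step 2: pain-point in that area
    pain_point = ask(f"What is a major pain-point in {business_area}?")
    # Step 3: proposed Agentic AI solution
    print(ask(f"Propose an Agentic AI solution for: {pain_point}"))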
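
One small follow-up on the exercise output: print(response3) dumps the entire ChatCompletion object, which is why the raw repr appears in the cell output. To render just the proposal, the notebook's earlier Markdown pattern could be reused (assuming response3 from the exercise cell):

    from IPython.display import Markdown, display

    # Show only the assistant's text, rendered as Markdown in the notebook
    display(Markdown(response3.choices[0].message.content))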