Merge pull request #1 from nasa-jpl/feature/demo
Update TurtleSim demo. Minor changes to ROSA base class.
RobRoyce authored Aug 10, 2024
2 parents 958b1c9 + e7c7692 commit 6300d8e
Showing 11 changed files with 347 additions and 199 deletions.
3 changes: 3 additions & 0 deletions Dockerfile
@@ -38,6 +38,9 @@ RUN echo "export ROSLAUNCH_SSH_UNKNOWN=1" >> /root/.bashrc
COPY . /app/
WORKDIR /app/

# Uncomment this line to test with local ROSA package
# RUN python3.9 -m pip install --user -e .

# Run roscore in the background, then run `rosrun turtlesim turtlesim_node` in a new terminal, finally run main.py in a new terminal
CMD /bin/bash -c 'source /opt/ros/noetic/setup.bash && \
roscore & \
102 changes: 79 additions & 23 deletions README.md
@@ -1,42 +1,98 @@
# ROS Agent (ROSA)

ROSA is an AI agent that can be used to interact with ROS (Robot Operating System) and perform various tasks.
It is built using [Langchain](https://python.langchain.com/v0.2/docs/introduction/) and the
[ROS](https://www.ros.org/) framework.

## Installation

Requirements:
- Python 3.9 or higher
- ROS Noetic (or higher)

**Note:** ROS Noetic uses Python 3.8, but LangChain requires Python 3.9 or higher. To use ROSA with ROS Noetic,
you will need to create a virtual environment with Python 3.9 or higher and install ROSA in that environment. This
restriction does not apply to ROS 2 distributions.

Use pip to install ROSA:

```bash
pip3 install jpl-rosa
```

# TurtleSim Demo
We have included a demo that uses ROSA to control the TurtleSim robot in simulation. To run the demo, you will need
to have Docker installed on your machine.

## Setup

1. Clone this repository
2. Configure the LLM in `src/turtle_agent/scripts/llm.py`
3. Run the demo script: `./demo.sh`
4. Start ROSA in the new Docker session: `catkin build && source devel/setup.bash && roslaunch turtle_agent agent`
5. Run example queries: type `examples` to list them, then enter a query number (e.g. `2`) and press enter

# Adapting ROSA for Your Robot

ROSA is designed to be easily adaptable to different robots and environments. To adapt ROSA for your robot, you
can either (1) create a new class that inherits from the `ROSA` class, or (2) create a new instance of the `ROSA` class
and pass in the necessary parameters. The first option is recommended if you need to make significant changes to the
agent's behavior, while the second option is recommended if you want to use the agent with minimal changes.

In either case, ROSA is adapted by providing it with a new set of tools and/or prompts. The tools are used to interact
with the robot and the ROS environment, while the prompts are used to guide the agent's behavior.
## Adding Tools
There are two methods for adding tools to ROSA:
1. Pass in a list of `@tool` functions using the `tools` parameter.
2. Pass in a list of Python packages containing `@tool` functions using the `tool_packages` parameter.

The first method is recommended if you have a small number of tools, while the second method is recommended if you have
a large number of tools or if you want to organize your tools into separate packages.

**Hint:** check `src/turtle_agent/scripts/turtle_agent.py` for examples on how to use both methods.
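To illustrate the difference between the two methods without needing ROS or LangChain installed, here is a self-contained sketch using plain-Python stand-ins for `@tool` objects (the `make_tool` helper and `my_robot_tools` module name are hypothetical; ROSA only requires that each tool expose `name` and `func` attributes):

```python
import types

# Stand-in for an object produced by LangChain's @tool decorator:
# ROSA only checks that it has `name` and `func` attributes.
def make_tool(name, func):
    return types.SimpleNamespace(name=name, func=func)

move = make_tool("move_forward", lambda d: f"Moving forward by {d} units.")
spin = make_tool("spin", lambda: "Spinning in place.")

# Method 1: a flat list of tools, as passed via ROSA's `tools` parameter.
tools_list = [move, spin]

# Method 2: a package/module of tools, as passed via `tool_packages`.
# ROSA walks dir(package) and keeps every public attribute that looks like a tool.
tool_pkg = types.ModuleType("my_robot_tools")
tool_pkg.move_forward = move
tool_pkg.spin = spin

discovered = [getattr(tool_pkg, n) for n in dir(tool_pkg)
              if not n.startswith("_") and hasattr(getattr(tool_pkg, n), "func")]

print(sorted(t.name for t in discovered))
```

Both paths end up in the same internal tool registry; the package form just lets you keep large tool collections in their own modules.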

## Adding Prompts
To add prompts to ROSA, you need to create a new instance of the `RobotSystemPrompts` class and pass it to the `ROSA`
constructor using the `prompts` parameter. The `RobotSystemPrompts` class contains the following attributes:

- `embodiment_and_persona`: Gives the agent a sense of identity and helps it understand its role.
- `about_your_operators`: Provides information about the operators who interact with the robot, which can help the agent
understand the context of the interaction.
- `critical_instructions`: Provides critical instructions that the agent should follow to ensure the safety and
well-being of the robot and its operators.
- `constraints_and_guardrails`: Gives the robot a sense of its limitations and informs its decision-making process.
- `about_your_environment`: Provides information about the physical and digital environment in which the robot operates.
- `about_your_capabilities`: Describes what the robot can and cannot do, which can help the agent understand its
limitations.
- `nuance_and_assumptions`: Provides information about the nuances and assumptions that the agent should consider when
interacting with the robot.
- `mission_and_objectives`: Describes the mission and objectives of the robot, which can help the agent understand its
purpose and goals.
- `environment_variables`: Provides information about the environment variables that the agent should consider when
interacting with the robot, e.g. `ROS_MASTER_URI` or `ROS_IP`.

## Example
Here is a quick and easy example showing how to add new tools and prompts to ROSA:
```python
from langchain.agents import tool
from rosa import ROSA, RobotSystemPrompts

@tool
def move_forward(distance: float) -> str:
    """
    Move the robot forward by the specified distance.
    :param distance: The distance to move the robot forward.
    """
    # Your code here ...
    return f"Moving forward by {distance} units."

prompts = RobotSystemPrompts(
    embodiment_and_persona="You are a cool robot that can move forward."
)

llm = get_your_llm_here()
rosa = ROSA(ros_version=1, llm=llm, tools=[move_forward])
rosa.invoke("Move forward by 2 units.")
```
2 changes: 1 addition & 1 deletion setup.py
@@ -22,7 +22,7 @@

setup(
name="jpl-rosa",
version="1.0.0",
version="1.0.1",
license="Apache 2.0",
description="ROSA: the Robot Operating System Agent",
long_description=long_description,
18 changes: 12 additions & 6 deletions src/rosa/prompts.py
@@ -16,11 +16,18 @@


class RobotSystemPrompts:
    def __init__(
        self,
        embodiment_and_persona: Optional[str] = None,
        about_your_operators: Optional[str] = None,
        critical_instructions: Optional[str] = None,
        constraints_and_guardrails: Optional[str] = None,
        about_your_environment: Optional[str] = None,
        about_your_capabilities: Optional[str] = None,
        nuance_and_assumptions: Optional[str] = None,
        mission_and_objectives: Optional[str] = None,
        environment_variables: Optional[dict] = None
    ):
        self.embodiment = embodiment_and_persona
        self.about_your_operators = about_your_operators
        self.critical_instructions = critical_instructions
@@ -31,7 +38,6 @@
        self.mission_and_objectives = mission_and_objectives
        self.environment_variables = environment_variables

    def as_message(self) -> tuple:
        """Return the robot prompts as a tuple of strings for use with OpenAI tools."""
        return "system", str(self)
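The `as_message` method returns a `("system", text)` tuple, which is the message shape LangChain chat prompt templates accept. A minimal stand-in (not the real class, which renders every non-empty prompt section) showing the contract:

```python
class PromptsSketch:
    """Stand-in for RobotSystemPrompts: renders itself to text, then wraps as a message."""

    def __init__(self, embodiment_and_persona=None):
        self.embodiment = embodiment_and_persona

    def __str__(self):
        # The real class concatenates every non-empty section; one section shown here.
        return f"Embodiment and persona: {self.embodiment}"

    def as_message(self) -> tuple:
        # ("system", ...) marks this as the system prompt for the chat model.
        return "system", str(self)

role, text = PromptsSketch("You are a turtle.").as_message()
```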
50 changes: 25 additions & 25 deletions src/rosa/rosa.py
@@ -12,6 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.

import os
from langchain.agents import AgentExecutor
from langchain.agents.format_scratchpad.openai_tools import format_to_openai_tool_messages
from langchain.agents.output_parsers.openai_tools import OpenAIToolsAgentOutputParser
@@ -21,9 +22,7 @@
from langchain_openai import AzureChatOpenAI, ChatOpenAI
from langchain_community.callbacks import get_openai_callback
from rich import print
from typing import Literal, Union, Optional

try:
    from .prompts import system_prompts, RobotSystemPrompts
@@ -39,11 +38,10 @@ class ROSA:

    Args:
        ros_version: The version of ROS that the agent will interact with. This can be either 1 or 2.
        llm: The language model to use for generating responses. This can be either an instance of AzureChatOpenAI or ChatOpenAI.
        tools: A list of LangChain tool functions to use with the agent.
        tool_packages: A list of Python packages that contain LangChain tool functions to use with the agent.
        prompts: A list of prompts to use with the agent. This can be a list of prompts from the RobotSystemPrompts class.
        verbose: A boolean flag that indicates whether to print verbose output.
        blacklist: A list of ROS tools to exclude from the agent. This can be a list of ROS tools from the ROSATools class.
        accumulate_chat_history: A boolean flag that indicates whether to accumulate chat history.
@@ -54,8 +52,9 @@
    def __init__(
        self,
        ros_version: Literal[1, 2],
        llm: Union[AzureChatOpenAI, ChatOpenAI],
        tools: Optional[list] = None,
        tool_packages: Optional[list] = None,
        prompts: Optional[RobotSystemPrompts] = None,
        verbose: bool = False,
        blacklist: Optional[list] = None,
        accumulate_chat_history: bool = True,
@@ -69,17 +68,16 @@
        self.__show_token_usage = show_token_usage
        self.__blacklist = blacklist if blacklist else []
        self.__accumulate_chat_history = accumulate_chat_history
        self.__tools = self.__get_tools(ros_version, packages=tool_packages, tools=tools, blacklist=self.__blacklist)
        self.__prompts = self.__get_prompts(prompts)
        self.__llm_with_tools = llm.bind_tools(self.__tools.get_tools())
        self.__agent = self.__get_agent()
        self.__executor = self.__get_executor(verbose=verbose)

    def clear_chat(self):
        """Clear the chat history."""
        self.__chat_history = []
        os.system("clear")

    def invoke(self, query: str) -> str:
        """Invoke the agent with a user query."""
@@ -119,17 +117,19 @@ def __get_executor(self, verbose: bool):

    def __get_agent(self):
        agent = ({
            "input": lambda x: x["input"],
            "agent_scratchpad": lambda x: format_to_openai_tool_messages(x["intermediate_steps"]),
            "chat_history": lambda x: x["chat_history"],
        } | self.__prompts | self.__llm_with_tools | OpenAIToolsAgentOutputParser())
        return agent

    def __get_tools(self, ros_version: Literal[1, 2], packages: Optional[list], tools: Optional[list], blacklist: Optional[list]):
        rosa_tools = ROSATools(ros_version, blacklist=blacklist)
        if tools:
            rosa_tools.add_tools(tools)
        if packages:
            rosa_tools.add_packages(packages, blacklist=blacklist)
        return rosa_tools

    def __get_prompts(self, robot_prompts: Optional[RobotSystemPrompts] = None):
        prompts = system_prompts
27 changes: 20 additions & 7 deletions src/rosa/tools/__init__.py
@@ -54,6 +54,7 @@ class ROSATools:
    def __init__(self, ros_version: Literal[1, 2], blacklist: Optional[List[str]] = None):
        self.__tools: list = []
        self.__ros_version = ros_version
        self.__blacklist = blacklist

        # Add the default tools
        from . import calculation, log, ros1, ros2, system
@@ -79,6 +80,13 @@
    def get_tools(self) -> List[Tool]:
        return self.__tools

    def __add_tool(self, tool):
        if hasattr(tool, 'name') and hasattr(tool, 'func'):
            if self.__blacklist and 'blacklist' in tool.func.__code__.co_varnames:
                # Inject the blacklist into the tool function
                tool.func = inject_blacklist(self.__blacklist)(tool.func)
            self.__tools.append(tool)

    def __iterative_add(self, package, blacklist: Optional[List[str]] = None):
        """
        Iterate through a package and add each @tool to the tools list.
@@ -89,17 +97,22 @@ def __iterative_add(self, package, blacklist: Optional[List[str]] = None):
        for tool_name in dir(package):
            if not tool_name.startswith("_"):
                t = getattr(package, tool_name)
                self.__add_tool(t)

    def add_packages(self, tool_packages: List, blacklist: Optional[List[str]] = None):
        """
        Add a list of tools to the Tools object by iterating through each package.
        :param tool_packages: A list of tool packages to add to the Tools object.
        """
        for pkg in tool_packages:
            self.__iterative_add(pkg, blacklist=blacklist)

    def add_tools(self, tools: list):
        """
        Add a list of tools to the Tools object.
        :param tools: A list of tools to add.
        """
        for tool in tools:
            self.__add_tool(tool)
1 change: 0 additions & 1 deletion src/turtle_agent/requirements.txt

This file was deleted.

52 changes: 52 additions & 0 deletions src/turtle_agent/scripts/llm.py
@@ -0,0 +1,52 @@
# Copyright (c) 2024. Jet Propulsion Laboratory. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import dotenv
import os
from azure.identity import ClientSecretCredential, get_bearer_token_provider
from langchain_openai import AzureChatOpenAI


def get_llm():
    """A helper function to get the LLM instance."""
    dotenv.load_dotenv(dotenv.find_dotenv())

    APIM_SUBSCRIPTION_KEY = os.getenv("APIM_SUBSCRIPTION_KEY")
    default_headers = {}
    if APIM_SUBSCRIPTION_KEY is not None:
        # Only set this if the APIM API requires a subscription key
        default_headers["Ocp-Apim-Subscription-Key"] = APIM_SUBSCRIPTION_KEY

    # Set up authority and credentials for Azure authentication
    credential = ClientSecretCredential(
        tenant_id=os.getenv("AZURE_TENANT_ID"),
        client_id=os.getenv("AZURE_CLIENT_ID"),
        client_secret=os.getenv("AZURE_CLIENT_SECRET"),
        authority="https://login.microsoftonline.com",
    )

    token_provider = get_bearer_token_provider(
        credential, "https://cognitiveservices.azure.com/.default"
    )

    llm = AzureChatOpenAI(
        azure_deployment=os.getenv("DEPLOYMENT_ID"),
        azure_ad_token_provider=token_provider,
        openai_api_type="azure_ad",
        api_version=os.getenv("API_VERSION"),
        azure_endpoint=os.getenv("API_ENDPOINT"),
        default_headers=default_headers,
    )

    return llm
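The helper reads everything from environment variables, typically a `.env` file located by `dotenv.find_dotenv()`. A sketch of the variables it expects — the names are taken from the `os.getenv` calls above, but all values here are placeholders you must replace with your own Azure settings:

```shell
# .env — placeholder values, not real credentials
APIM_SUBSCRIPTION_KEY=your-apim-key        # optional; only if your APIM gateway requires it
AZURE_TENANT_ID=your-tenant-id
AZURE_CLIENT_ID=your-client-id
AZURE_CLIENT_SECRET=your-client-secret
DEPLOYMENT_ID=your-azure-openai-deployment
API_VERSION=your-api-version
API_ENDPOINT=https://your-resource.openai.azure.com
```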