
2. Running ROScribe


The following steps show how to use ROScribe to create your own ROS package through a natural language interface:

  • Set up your OpenAI API key: export OPENAI_API_KEY=[your api key]

  • Start ROScribe by typing roscribe in the terminal (the example commands below show these first two steps together).

  • SpecAgent is your assistant for setting up the ROS graph of your project. Briefly describe the robot software you want to deploy, and SpecAgent will ask you high-level questions about your deployment.

  • Once SpecAgent learns about your project, it shows you a list of the ROS nodes and topics involved in your software (a hypothetical example is sketched below). The subscriber/publisher relationships between the ROS nodes can be visualized, similar to an rqt_graph view.

  • After you finalize the node list, SpecAgent passes your project to GenAgent, which starts implementing your ROS nodes.

  • You can generate Python ROS node implementations with the help of GenAgent (a rough sketch of such a node is shown below). Alternatively, GenAgent can download open-source repositories instead of generating the code itself.

  • After GenAgent finishes code generation, your generated ROS workspace is handed to PackAgent, which generates the ROS launch file, package.xml, CMakeLists.txt, and README.md (a minimal launch file sketch is shown below).

  • You can build and install your ROS package using catkin (example commands below). After installation, source your workspace so that ROS can find your packages.

  • You can launch your ROS package with the roslaunch command, passing the generated launch file (see the example below).

  • If you run into problems, SupportAgent will help you resolve issues related to your project.

  • Keep in mind that each agent maintains a persistent state of your project, so you can end a session and resume working with ROScribe later.
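
The first two steps above come down to two terminal commands (assuming ROScribe is already installed and the roscribe executable is on your PATH):

```bash
# Make your OpenAI API key available to ROScribe
export OPENAI_API_KEY="<your-api-key>"

# Start ROScribe
roscribe
```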
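
As a purely hypothetical illustration of the kind of node list SpecAgent may propose, a simple mapping robot could be broken down as follows (all node and topic names are made up for illustration):

```
lidar_node        publishes  /scan
slam_node         subscribes /scan          publishes /map
navigation_node   subscribes /map, /odom    publishes /cmd_vel
motor_driver      subscribes /cmd_vel       publishes /odom
```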
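
The nodes generated by GenAgent are ordinary rospy nodes. As a rough sketch only (the actual generated code depends on your project), a publisher corresponding to the hypothetical lidar_node above might look like:

```python
#!/usr/bin/env python
# Hypothetical sketch of a generated ROS node; not actual ROScribe output.
import rospy
from sensor_msgs.msg import LaserScan

def lidar_node():
    pub = rospy.Publisher('scan', LaserScan, queue_size=10)
    rospy.init_node('lidar_node')
    rate = rospy.Rate(10)  # publish at 10 Hz
    while not rospy.is_shutdown():
        scan = LaserScan()
        scan.header.stamp = rospy.Time.now()
        scan.header.frame_id = 'laser'
        pub.publish(scan)
        rate.sleep()

if __name__ == '__main__':
    try:
        lidar_node()
    except rospy.ROSInterruptException:
        pass
```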
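
The launch file produced by PackAgent is a standard roslaunch XML file. A minimal sketch, again with hypothetical package and node names:

```xml
<launch>
  <!-- Hypothetical example; the generated file has one entry per ROS node -->
  <node pkg="my_robot" type="lidar_node.py" name="lidar_node" output="screen"/>
  <node pkg="my_robot" type="slam_node.py" name="slam_node" output="screen"/>
</launch>
```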
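
Building and sourcing with catkin typically looks like this (assuming the generated workspace lives at ~/catkin_ws):

```bash
cd ~/catkin_ws
catkin_make                  # or: catkin build, if you use catkin_tools
source devel/setup.bash      # make the new packages visible to ROS
```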
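
Finally, launch the package via the generated launch file (hypothetical package and launch file names):

```bash
roslaunch my_robot my_robot.launch
```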

Here is an overview of how ROScribe and its agents handle your project:


Watch this demo to see how ROScribe works:

ROScribe uses OpenAI's gpt-3.5-turbo-16k as the default LLM. You can switch to other models supported by LangChain; however, you may need to obtain an API key for each model or run them locally.
