Want us to integrate Almond Bot into your workflow? Let our team handle the integration and programming for your specific use case. Contact us to get started.
tl;dr
We’ve integrated best-in-class hardware, vision, and compute with the most popular software libraries to make automation a breeze. Access waypoints, AprilTags, object detection, end-to-end AI, and more through a single API.
With Almond Bot, you can:
Program basic tasks with waypoints
Detect & move relative to AprilTags
Detect & classify objects
Automate complex tasks with AI
Bot API
Our API exposes both low- and high-level control of Almond Bot, from specifying joint angles and Cartesian poses, to tele-operation, to running end-to-end AI models. Take a look at the examples below to see what is possible with just a few lines of code.
Examples
Basic Automation
Move to set points.
from almond.client import AlmondBotClient, Pose

bot = AlmondBotClient()
await bot.connect()

# Pick at a fixed waypoint, then lift the tool straight up.
await bot.open_tool()
await bot.set_tool_pose(Pose(x=0.1, y=0.2, z=0.3, roll=0, pitch=0, yaw=0))
await bot.close_tool()
await bot.set_tool_pose(Pose(z=10), is_offset=True)
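The is_offset=True flag in the last call moves the tool relative to its current pose rather than to an absolute position. Here is a minimal, self-contained sketch of that composition for illustration; the Pose dataclass below is a stand-in, not the library class, and treating the rotation components as additive is a simplification (real pose composition multiplies rotations):

```python
from dataclasses import dataclass


@dataclass
class Pose:
    # Stand-in mirroring the fields used in the examples above.
    x: float = 0.0
    y: float = 0.0
    z: float = 0.0
    roll: float = 0.0
    pitch: float = 0.0
    yaw: float = 0.0


def apply_offset(current: Pose, offset: Pose) -> Pose:
    # Component-wise offset, as is_offset=True implies for translation.
    # (Summing roll/pitch/yaw is only an approximation for rotations.)
    return Pose(
        x=current.x + offset.x,
        y=current.y + offset.y,
        z=current.z + offset.z,
        roll=current.roll + offset.roll,
        pitch=current.pitch + offset.pitch,
        yaw=current.yaw + offset.yaw,
    )


# Offsetting the pick pose by z=10 raises the tool without changing x/y.
target = apply_offset(Pose(x=0.1, y=0.2, z=0.3), Pose(z=10))
```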
AprilTag
Align with tags around your factory, and then perform relative movements.
from almond.client import AlmondBotClient, Pose

bot = AlmondBotClient()
await bot.connect()

# Align the tool with AprilTag 0, then move relative to the tag.
await bot.align_with_apriltag(id=0, size=100)
await bot.open_tool()
await bot.set_tool_pose(Pose(z=10), is_offset=True)
await bot.close_tool()
await bot.move_arc(Pose(x=-100, y=-100), r=500, is_offset=True)
success = await bot.verify_scene("door is open")
Object Detection
Contact us to build a custom object detection model for your use case.
from almond.client import AlmondBotClient, Pose

bot = AlmondBotClient()
await bot.connect()

# Pick each detected widget and drop it in the box.
poses = await bot.detect_poses("widgets")
for pose in poses:
    await bot.open_tool()
    await bot.set_tool_pose(pose, is_offset=True)
    await bot.close_tool()
    await bot.set_tool_pose(Pose(x=100, y=100, z=50))
    await bot.open_tool()
success = await bot.verify_scene("all widgets in box")
AI
Develop AI directly with our LeRobot fork, or use our simple APIs to collect data, train, and run models.
Collect Data
from almond.client import AlmondBotClient, Mode

bot = AlmondBotClient()
await bot.connect()

# Enable robot teleop
await bot.set_mode(Mode.TELEOPERATION)
await bot.record_episode(
    task_name="pickup_widget",
    duration_seconds=60,
)
Train Model
from almond.client import AlmondBotClient, AIModel

bot = AlmondBotClient()
await bot.connect()

training = await bot.train_task(
    task_name="pickup_widget",
    training_name="my_pi0_training",
    model=AIModel.PI0,
)
Run Model
from almond.client import AlmondBotClient

bot = AlmondBotClient()
await bot.connect()

await bot.run_task(
    task_name="pickup_widget",
    training_name="my_pi0_training",
)
success = await bot.verify_scene(
    "the widget is in the container"
)
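Since run_task and verify_scene return control to your code, a natural pattern is to retry the task until the scene check passes. Here is a hedged, self-contained sketch of that control flow with the client calls stubbed out as plain coroutines; the helper name and retry policy are illustrative, not part of the Almond Bot API:

```python
import asyncio


async def run_until_verified(run_task, verify_scene, max_attempts=3):
    """Retry a task until a scene check passes or attempts run out."""
    for attempt in range(1, max_attempts + 1):
        await run_task()
        if await verify_scene():
            return attempt  # number of attempts used
    return None  # task never verified


# Stubs simulating a task that only succeeds on the second attempt.
state = {"runs": 0}


async def fake_run_task():
    state["runs"] += 1


async def fake_verify_scene():
    return state["runs"] >= 2


attempts = asyncio.run(run_until_verified(fake_run_task, fake_verify_scene))
```

In production the two callables would be small wrappers around bot.run_task(...) and bot.verify_scene(...), keeping the retry policy in one place.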