Building a more accessible robotics platform

Hello World,

I’m Cristian, I make things (move). Before joining Balena, I worked on turning OpenEyes into a product. It never hit the market, and never will, but I gained a lot of experience with edge machine learning, robotics, haptic feedback, and the ROS stack.

At my first interview for Balena, @phil-d-wilson dropped a hint about integrating robotics tools such as ROS with the Balena ecosystem. That’s when I started thinking about what became this project.

ROS is the de facto standard for everything robotics nowadays. It’s a mature ecosystem that includes communication protocols, tools, libraries, and even a powerful simulation environment.

OSRF sells a robot kit developed around ROS, called the TurtleBot. Our first idea was to simply make sure our fleets work perfectly with a TurtleBot. Their only offering that matches most of our requirements is the TurtleBot3 Waffle, which comes with an already outdated Raspberry Pi 3 and no ML acceleration, at $1,499.

So obviously, I decided to design my own. As low cost as possible, but still capable of running state-of-the-art robot libraries and neural-network-based computer vision. In other words, we want you to be able to play with the same software stack as the big boys, on a kit that’s less expensive than a gaming console.

Functionality

Mapping and Navigation

SLAM (Simultaneous Localization and Mapping) is a general term for the computational task of constructing a map while also keeping track of the agent’s position within it. For the sake of simplicity, we’ll be talking about Visual SLAM. This approach localizes the agent by recognizing landmarks in spaces it has already explored.

To keep resource consumption to a minimum, this implementation uses a 2D LIDAR-based SLAM solution built on the Hector SLAM ROS package. gmapping is another popular alternative.
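If you’re curious what the SLAM output looks like from code, here’s a minimal sketch (assuming a running hector_mapping node publishing on the default `/map` topic) that subscribes to the occupancy grid with rospy:

```python
#!/usr/bin/env python
# Minimal sketch: subscribe to the occupancy grid that hector_mapping
# (or gmapping) publishes on the default /map topic.
import rospy
from nav_msgs.msg import OccupancyGrid

def on_map(msg):
    info = msg.info
    rospy.loginfo("map %dx%d cells at %.3f m/cell", info.width, info.height, info.resolution)

rospy.init_node("map_listener")
rospy.Subscriber("/map", OccupancyGrid, on_map)
rospy.spin()
```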

3D SLAM is also a possibility with the addition of an Intel RealSense camera, but it will require more computational resources. rtabmap is a great option for mapping with an RGB-D camera. However, we recommend upgrading the SBC to something like the Nvidia Xavier NX if 3D SLAM is required for your use case.

Deep Learning

Deep Learning is a huge and evolving field, but for the scope of this project we’ll focus on convolutional neural networks aimed at object detection and segmentation. Of course, as long as the hardware platform supports it, custom models and networks can be used.

Object detection is the task of identifying different classes of objects, labeling them, and providing a bounding box: the region in the original image where the object was found. These coordinates can be processed to allow the robot to follow or grab a specific object.

Some of the options that can run on embedded hardware are SSD (Single Shot Detector), ResNet, MobileNet, and YOLO.
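As a taste of how little code this takes on the Nano, here’s a hedged sketch using NVIDIA’s jetson-inference Python bindings; the model name and `csi://0` camera URI follow the library’s pretrained SSD-MobileNet examples:

```python
# Sketch with NVIDIA's jetson-inference Python bindings: run a
# pretrained SSD-MobileNet detector on the CSI camera and print each
# detection's label and bounding box. "csi://0" assumes the Pi Camera
# on the first CSI port.
import jetson.inference
import jetson.utils

net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)
camera = jetson.utils.videoSource("csi://0")

while True:
    img = camera.Capture()
    for det in net.Detect(img):
        print(net.GetClassDesc(det.ClassID), det.Left, det.Top, det.Right, det.Bottom)
```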

Segmentation is, in a way, the reverse of object detection. This approach takes every pixel in an image and tries to place it in one of the classes the network was trained on. In other words, every pixel has to belong to “something”. Segmentation usually outputs a mask: a representation of the original image where each class is rendered in a different color. This can then be further processed to enable the robot to perform specific actions.

Compared to object detection, this has the advantage that everything must belong to a class, even the ground plane and walls, which can be very useful information for specific tasks.
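The segmentation pipeline looks almost identical in jetson-inference; buffer handling differs between library versions, so treat this as an outline rather than a drop-in script:

```python
# Outline of a segNet pipeline: classify every pixel and render the
# colorized class mask. API details vary across jetson-inference
# versions, so this is a sketch.
import jetson.inference
import jetson.utils

net = jetson.inference.segNet("fcn-resnet18-voc")
camera = jetson.utils.videoSource("csi://0")
display = jetson.utils.videoOutput("display://0")

img = camera.Capture()
overlay = jetson.utils.cudaAllocMapped(width=img.width, height=img.height, format=img.format)

net.Process(img)        # classify every pixel in the frame
net.Overlay(overlay)    # render the colorized class mask
display.Render(overlay)
```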

These are just the core tasks that we envision running on this hardware. For example, @andrewnhem would like to use the platform for autonomous gardening, while @rahul-thakoor is thinking of adding an SDR and creating maps of RF sources around the robot’s environment. We’d be more than happy to hear more ideas like these, and even to help you implement them.

Requirements

To implement all the intended features, our platform needs to meet the following hardware requirements:

  1. A “rolling” robot platform (with optional wheel encoders).
  2. The system must have an SBC with AI acceleration, either built into the SoC (like the Nvidia Jetson Nano) or via a USB/PCIe dongle (like the Google Coral or Intel NCS2).
  3. The system must have an RGB camera.
  4. The system must have a LIDAR sensor to perform SLAM.
  5. The system must be able to avoid obstacles automatically.
  6. The battery system should be able to sustain all of the components running at the same time.
  7. The total system should cost around 300-400 Euros.

So, what’s next?

Stay tuned for the next update where we’ll explore the component choice and CAD design.

But first, here’s a teaser of the finished prototype.

What do you all think? Are you working on something similar? Let me know! I’m interested in feedback, other ideas, and meeting other robotics addicts on our Forums.


Yes @dragomir! I can’t wait to follow along on this thread!


Likewise, looking forward to building this one myself!

Components

SBC


Since the ability to run neural networks is one of the core features of this build, we obviously need an NPU (Neural Processing Unit). These are specialized chips that accelerate matrix multiplication and the other building blocks of neural networks. A few of the options on the market include Google’s Coral and the Intel NCS2. While both are phenomenal devices, each costs around 70€ and connects to the SBC over USB 3. Together with an SBC like the Raspberry Pi 4, the cost rises to well over 100€, which is a bit too much for this build.

However, Nvidia recently released the 2GB version of the Jetson Nano. At around 60 euros, this is the cheapest solution on the market that offers neural-net acceleration along with a modern 4-core ARM64 CPU.

While most NPUs on the market come with their own proprietary SDKs, the Jetson Nano has 128 standard CUDA cores, meaning that networks written in frameworks like TensorFlow, PyTorch, or Darknet can be deployed with minimal modifications.
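A quick way to confirm that a framework actually sees those CUDA cores (assuming NVIDIA’s PyTorch wheel for Jetson is installed):

```python
# Quick sanity check that a framework sees the Nano's CUDA cores
# (assumes NVIDIA's PyTorch wheel for Jetson is installed).
import torch

print(torch.cuda.is_available())        # True on a working JetPack install
print(torch.cuda.get_device_name(0))
x = torch.rand(1000, 1000, device="cuda")
print((x @ x).sum().item())             # the matmul actually runs on the GPU
```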

Motors

According to initial calculations, the full robot weighs about 700g. Back in the day, I used to build Mini Sumo robots for competitions, where the rules impose a weight limit of 500g.


During that time I found the Pololu 50:1 Micro Gear Motors to be a perfect balance between torque and speed. Since the weight is quite similar, I decided to use the same motors, and they are great. Additionally, each motor is equipped with a Hall-effect encoder.

A control loop such as PID can use the encoder outputs to enable extremely precise control of movement, which increases the accuracy of SLAM, mapping, and navigation.
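To make that concrete, here’s a minimal PID velocity-loop sketch; `read_encoder_ticks` and `set_motor_pwm` are hypothetical helpers, and the gains are placeholders to tune on the real robot:

```python
# Minimal PID velocity-loop sketch. read_encoder_ticks() and
# set_motor_pwm() are hypothetical helpers provided elsewhere;
# the gains are placeholders to tune on the real robot.
import time

KP, KI, KD = 1.0, 0.1, 0.01

def pid_velocity_loop(target_tps, read_encoder_ticks, set_motor_pwm, dt=0.02):
    integral, prev_err = 0.0, 0.0
    last_ticks = read_encoder_ticks()
    while True:
        ticks = read_encoder_ticks()
        measured_tps = (ticks - last_ticks) / dt    # ticks per second
        last_ticks = ticks
        err = target_tps - measured_tps
        integral += err * dt
        derivative = (err - prev_err) / dt
        prev_err = err
        set_motor_pwm(KP * err + KI * integral + KD * derivative)
        time.sleep(dt)
```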

Motor Driver

Most robots like this have an Arduino, or a compatible microcontroller, that handles motor control, encoders, servos, and other subsystems. However, that adds another layer of software outside of our direct control, and it is crucial to be able to update all software through Balena. Although a tad expensive, SparkFun’s Qwiic Motor Driver is controllable via I2C/UART, which makes it perfect for our use case.

There are lots of other I2C motor controllers with similar specs; my selection was based purely on availability and stock, so feel free to use whatever driver you want, as long as it fits the power requirements of your motors. Here are some options that work well with the Pololu Micro Motors:

Power

Powering the platform up is extremely simple.
Two 18650 Li-Ion cells in series are wired directly into the motor driver and into the 5V power regulators, which power the other components. Together, the two regulators can handle loads of up to 6.5A. That’s more than enough to power the SBC, LIDAR, and all other peripherals, and leaves some room for expansion.

I’ll follow up with battery life benchmarks later. For now, I know that the current 3400mAh capacity keeps the SBC and peripherals running for about four hours, without the camera, motors, and LIDAR active. Under a more serious workload like mapping, we can expect at least an hour of battery life.

Sensors

Ranging Sensors


Our robot should automatically avoid obstacles and not bump into objects. To achieve this, we’ll use two types of ranging sensors: ultrasound and Time-of-Flight. Each has its own advantages and disadvantages.

Ultrasound sensors work by means of echolocation. They send an ultrasonic pulse, which bounces back when it hits an object. Using the speed of sound and the time it took for the echo to return, we can compute the distance. These sensors have fields of view of around 15-20 degrees and millimeter accuracy, but limited range.

Another drawback of ultrasound sensors is that they can be fooled by soft surfaces, where most of the sound energy is absorbed, and by angled surfaces, where the echo might never reach the transducers.

A classic in hobby robotics, the SR04 is a cheap ultrasound sensor with respectable specifications: a 3cm-200cm range, 15-degree FoV, and ±3mm accuracy. At around 3 euros a pop, it’s the obvious choice for a low-cost solution.
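For reference, here’s the classic SR04 read, adapted for the RPi.GPIO-compatible Jetson.GPIO library; the pin numbers are placeholders, and remember the SR04’s 5V echo line needs level shifting before a 3.3V input:

```python
# Classic SR04 read using the RPi.GPIO-compatible Jetson.GPIO library.
# Pin numbers are hypothetical; the SR04's 5V echo line needs a level
# shifter or divider before a 3.3V input.
import time
import Jetson.GPIO as GPIO

TRIG, ECHO = 11, 12     # placeholder BOARD pin numbers

GPIO.setmode(GPIO.BOARD)
GPIO.setup(TRIG, GPIO.OUT)
GPIO.setup(ECHO, GPIO.IN)

def read_distance_cm():
    GPIO.output(TRIG, True)             # 10 microsecond trigger pulse
    time.sleep(0.00001)
    GPIO.output(TRIG, False)
    start = end = time.time()
    while GPIO.input(ECHO) == 0:        # wait for the echo pulse to start...
        start = time.time()
    while GPIO.input(ECHO) == 1:        # ...and to end
        end = time.time()
    return (end - start) * 34300 / 2    # speed of sound, there and back

print("%.1f cm" % read_distance_cm())
```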

Time-of-Flight sensors work in a very similar way, but use infrared laser pulses to measure the distance to an object. Although less accurate, these sensors are much faster (speed of light vs. speed of sound) and have no issues with angled or soft surfaces.

The VL53L1X is a cost-effective implementation of such a sensor. It features a larger field of view than the SR04 and a range of up to 400cm. Additionally, these sensors have a programmable field of view of 4x4 regions, allowing us to modify the Region of Interest (ROI) to, for example, ignore the ground plane.

To get the advantages of both types of sensors, we opted for a hybrid choice, with a VL53L1X in the middle and two SR04s on the sides.

The only drawback of this configuration is the blind spot indicated by the hatched area. However, we are not too concerned about that, as we are only interested in detecting objects from about 20cm, to ensure enough space for the robot to turn without hitting the object.

LIDAR


A LIDAR sensor is basically a spinning ToF sensor: it rotates at a certain frequency and takes distance measurements at every angle.
These points are later used by the SLAM layer to turn them into meaningful maps and positioning information.
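Consuming that scan from ROS is a one-liner subscription; a sketch assuming the rplidar_ros driver is publishing on the default `/scan` topic:

```python
# Sketch: consume the scan published by the rplidar_ros driver on the
# default /scan topic.
import rospy
from sensor_msgs.msg import LaserScan

def on_scan(scan):
    # keep only readings within the sensor's valid range
    valid = [r for r in scan.ranges if scan.range_min < r < scan.range_max]
    if valid:
        rospy.loginfo("%d points, closest obstacle %.2f m", len(scan.ranges), min(valid))

rospy.init_node("scan_listener")
rospy.Subscriber("/scan", LaserScan, on_scan)
rospy.spin()
```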

The RPLidar is a 2D, 360-degree LIDAR. While not as accurate as other alternatives, at around $100 it is by far the most cost-effective LIDAR sensor on the market.

Its only drawback is that at maximum accuracy we only get a refresh rate of 5Hz, which means the robot has to drive (very) slowly while mapping.

Camera


The camera was a simple and obvious choice: the official Raspberry Pi camera has everything we need for running neural network inference. For more complex applications, it can be swapped for an Intel RealSense, which also provides a depth stream.

To be able to follow objects, we mounted the camera on a servo motor so it can be tilted to any angle. The pan axis is handled by the movement of the robot platform itself.

IMU

Inertial Measurement Units are a fusion of a gyroscope, an accelerometer, and optionally a magnetometer.

  • Gyroscopes measure angular rate.
  • Accelerometers measure force/acceleration.
  • Magnetometers measure the magnetic field around the sensor.

ROS supports a lot of IMU fusion filters, which take the data from these sensors and output the orientation of the object on three axes.

Together with wheel encoder information, the filtered IMU data can greatly improve the accuracy of mapping.

My choice of IMU is the Adafruit MPU-6050. This is one of the cheapest sensors of its kind, and I’ve had great results with it in an earlier ROS project.
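Reading it takes just a few lines with Adafruit’s CircuitPython driver (adafruit-circuitpython-mpu6050), assuming Blinka is set up and the sensor sits at its default I2C address:

```python
# Reading the MPU-6050 with Adafruit's CircuitPython driver
# (adafruit-circuitpython-mpu6050), assuming Blinka is set up and the
# sensor is at its default I2C address.
import board
import adafruit_mpu6050

i2c = board.I2C()
mpu = adafruit_mpu6050.MPU6050(i2c)

print("acceleration:", mpu.acceleration)   # per-axis accelerometer data
print("angular rate:", mpu.gyro)           # per-axis gyroscope data
```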

Bill-of-materials

| Item | Vendor | Store | Unit Cost (€) | Units | Total Cost (€) |
| --- | --- | --- | --- | --- | --- |
| 50:1 Micro Motor | Pololu | Pimoroni | 5.01 | 2 | 10.2 |
| Micro Motor Brackets | Pololu | Pimoroni | 4.39 | 1 | 4.39 |
| Hall-Effect Encoders | Pololu | Pimoroni | 8.77 | 1 | 8.77 |
| 60mm Wheels | Pololu | Pimoroni | 3.7 | 1 | 3.7 |
| Ball Caster | Pololu | Pimoroni | 2.90 | 2 | 5.8 |
| Qwiic Motor Driver | Sparkfun | Pimoroni | 15.21 | 1 | 15.21 |
| Power Regulators | Pololu | Pimoroni | 9.95 | 1 | 9.95 |
| Nvidia Jetson Nano 2GB | Nvidia | Pimoroni | 58.2 | 1 | 58.2 |
| Battery Cell | ECELL | Reichelt | 14.11 | 2 | 28.22 |
| Pi Camera V2.1 | Raspberry Pi | Pimoroni | 23.4 | 1 | 23.4 |
| RPLidar A1M8 | Slamtec | Mouser | 103 | 1 | 103 |
| SR-04 Ultrasonic Sensor | Adafruit | Mouser | 3.53 | 2 | 8.75 |
| VL53L1X ToF Sensor | Pimoroni | Pimoroni | 10.33 | 1 | 10.33 |
| MPU-6050 IMU | Adafruit | Pimoroni | 5.84 | 1 | 5.84 |
| SG-90 9g Servo | DFRobot | Mouser | 3.96 | 1 | 3.96 |
| **Total** | | | | | **299.72** |

Note: Prices at Mouser are displayed without VAT; I added 24% to those items to make the BOM as accurate as possible.

To be continued …

In the next update, we’ll take a look at some similar projects that inspired this build, and go through the process of designing the frame and preparing it for laser cutting.

Until then, here are a few renderings:

hello,

Nice project! I think it will be hard to keep the hardware cost close to the target point, considering the budget!
For SLAM, Cartographer is far simpler than, and superior to, gmapping.
For your motor controller, you need to check that it can handle your motors’ stall current, otherwise it will burn the first time the wheels get blocked!

For the sonar, don’t forget that the ranging beam is a cone! It needs to be oriented slightly upward, otherwise you will detect the ground every time. You can also get SR04-like sensors on I2C that are much simpler to handle, at the same target price.

For those who don’t want to, or cannot, build their own frame, there are a lot of cheap indoor or outdoor rover kits in most shops. Here is an example of a small outdoor rover frame for 40€ on Amazon with four 12V motors and wheels. It is aluminum, so it is easy to drill extra holes for mounting electronics, or to use double-sided tape.


(In the picture it doesn’t have the Balena board yet, but it is coming.)

For power regulation, it is important to have plenty of it, as the motors will introduce some noise.
Generally, LM2596-based boards are very cheap and quite sufficient. It is better to use several regulators to keep the sensors stable and protect them! Better to have a 10-cent regulator burn than the LIDAR!

Hey @khancyr, thanks for all the feedback.

  1. I haven’t worked with Cartographer yet; I’ll give it a try, thanks.
  2. The motor controller supports up to 1.5A per channel, and the stall current of the 50:1 motors is 920mA.
  3. That’s actually a great point; I’ll update the CAD design to compensate for that.
  4. Do you have a link to the I2C-based SR04?

I have used a lot of LM2596-based boards, hundreds of them, and very few actually work properly. A few don’t work at all, but most do work yet never reach the advertised 3A. I still use them for some projects, but I can’t recommend them. Even if it’s more expensive, for this build I opted for the Pololu 5V, 3.2A Step-Down Voltage Regulator D36V28F5.

CAD and Fabrication

I’d love to give you an overview of the steps I usually take when designing something, but honestly, I can’t. My workflow is usually extremely chaotic, unstructured, and horribly difficult to document.

So, while cleaning up the dozens of AutoCAD files related to this project, I decided to put some structure into the whole thing, and in the meantime create a sort of tutorial on how not to end up there again. Maybe you’ll find it useful too.

1. Prepare

Okay, so we have created a BOM; we know what components to use and, most importantly, why we want to use them. Now it’s time to get familiar with them and understand their physical particularities (e.g. the Jetson Nano’s large heatsink). It’s a good idea to download all the datasheets, and maybe read some of the more important ones. As they say: RTFM!

At the end of each datasheet, you’ll find a dimensional drawing of the part. You basically want to bring the top (or bottom) section of that part into your CAD software of choice. There are a few ways of achieving that:

  • Some vendors offer downloadable dimensional drawings in a format like DXF (e.g. the Raspberry Pi).
  • Re-draw the section yourself using basic drawing tools. It might take a while, but you’ll get better at working with your CAD software this way. Plus, you don’t have to draw everything; just focus on the essentials: footprint and mounting methods. (See the RPLidar or Jetson Nano in the drawing.)
  • A hack I’ve learned from my dad, described below.

How to copy dimensional drawings from datasheets

  1. Find the datasheet or dimensional drawing in PDF form. If needed, isolate the page and save it to a separate PDF.
    I’ll take the Pololu Micro Motors as an example. Luckily, they provide the mechanical drawings as a separate file.
  2. Look for PDF-to-DWG converters online. I had good results with autodwg and easypdf.
  3. Open the file in your CAD software of choice. Use the scale tool to make sure the dimensions are correct.

Conversion is never perfect; for example, text is not always rendered correctly. Still, the result is more than usable once it’s scaled to the correct dimensions. Feel free to delete details and simplify shapes as much as you can. It’s easier to delete things than to draw them yourself :wink:

So once you have all of your parts in CAD format, create a new drawing file and place them in a configuration like this.

This will become your toolbox; think of these parts as stencils for what you are going to (virtually) build.

While you’re at it, now is a good time to check the weights of your components. In this case, the heaviest would be the batteries, the RPLidar, and the Jetson Nano. You’ll need that information later when placing them.

2. Research

Now is the time to think about how you want to fabricate your design. Different fabrication methods require different approaches to joints and mountings. In more advanced builds, and especially when 3D printing, it’s crucial to know the properties of the material you’ll be using. For this build, we’ll use laser cutting with 3mm Plexiglas, as it’s a cheap and readily available solution almost everywhere.

Laser cutting Plexiglas is a two-dimensional operation, meaning it cuts through a sheet of known thickness. However, you can use box joints to mount parts at 90 degrees. I use this a lot in this design, for the sensor and camera mounts.

Look for similar solutions

Look for projects that use the same components or do the same thing. Try to note both the flaws and the clever solutions, as you might need them later.

Probably the most important component in this design is the SBC, so I went looking for solutions designed around the Nvidia Jetson Nano. Here is a list of the official robot kits that support the Nano. Some of them are variations on the same riff, but a few designs did stand out:

The Waveshare JetBot has some great features, including a PCB that handles power and motor driving, and even includes battery charging. At $120, the kit is also extremely affordable. However, there’s no obstacle detection, no possibility of adding a LIDAR, and no servo to tilt the camera. Another disadvantage is that it only supports the 4GB Jetson Nano kit.

The other solutions on that list seem like variations of the same idea, but powered by USB power banks.

What I really like about the Pololu Romi is the simplicity of the design. The circular shape, differential drive, centrally mounted motors, and batteries accessible from underneath are all features we are going to steal for our design. The only reason I’m not using this platform is that it only supports Raspberry Pi boards. Also, availability is not great these days.

Adding the TurtleBot3 to the list serves more of an aspirational role than anything else. This platform is on another level. The drivetrain is comprised of two Dynamixel servo motors that enable up to 23kg(!) of load. It also features an advanced controller called the OpenCR, and the Waffle Pi also supports a robot arm called the OpenManipulator. It’s really serious stuff.

In terms of functionality, however, there’s nothing the TurtleBot can do that our platform can’t. Ok, apart from lifting 23 kilos. And the robot arm, which costs an additional $1,499.

Ok, we’ve seen enough. Time to get to the real work. This is going to involve a lot of trial and error, and will probably be painful. So get your favorite inspirational quotes about perseverance ready, and let’s go.

3. Fool around. Find out.

Select an outline, in this case a circle, and try to fit as many components as possible in the available space.

A few general rules to placing components:

  • Draw a line down the middle between your motors, and try to distribute all the heavy components along that line.
  • Be mindful of heat, don’t place components that get hot next to batteries.
  • Think about wire management, add holes for wires to go through.

Oh, one more thing: I really recommend this great article about robot dynamics. Actually, that whole website is pure gold.

I won’t show you every step of the process; you either already know how to do it, or can find much better resources. Instead, I’ll show you some milestones and highlights.

Attempt 1

Here’s a first attempt. I started from a circle, but then extended the shape as I needed more space. Wait, something is missing. Where’s the SBC?

Attempt 2

Back to the drawing board. This time I decided that instead of extending the shape with a rectangle for more surface, I’d rather just increase the diameter. At this point I also decided to use the SparkFun Qwiic motor controller due to its smaller size. Most of the design decisions in this version made it to the prototype, for example the Jetson cutout, the battery positioning, and the LIDAR mount.

Attempt 3

One thing about the circular design is that there are just a few millimeters of clearance between the wheels and the sheet of Plexiglas. It’s fairly easy for debris or other little things on the floor to get stuck there and block the motors, which is why I prefer having the wheels on the outside. So I tried to give the outline a facelift. However, I’ll probably fabricate the round version anyway.

At this point I also added a servo in the front for the camera mount.

I have no clue how I ended up with this shape, but it might have something to do with this robot platform I used to have some years ago.

Finally…

I realized that the pan servo was useless, as the robot itself needs to rotate to follow an object anyway. Instead, I designed a tilt mount for the camera using box joints. I also added mounts for the obstacle detection sensors, and moved the battery holder to the top floor. This way we have more room for the electronics on the top side, and you can change batteries by flipping the robot.

4. Extrude!

As we are fabricating the platform using laser cutting, this step is totally optional, but I recommend doing it, mainly because it lets you check how objects fit along the Z axis without having to build anything: for example, checking the clearance between the Jetson Nano and the LIDAR sensor, or seeing if the box joints on the camera mount fit together.

I usually start by extruding the main board along with the screw holes.

For some simple shapes, like the Micro Motors and the brackets, it’s enough to draw a rough version of the part. However, for some other parts, like the Nano, that’s almost impossible, or at least very time consuming.

GrabCAD is an incredible resource for 3D files: an immense library of 3D objects. Think GitHub, but for CAD. I was lucky enough to find most parts for this build, including the wheels, brackets, motors, ball caster, Jetson Nano, and RPLidar.

One of the goals of building a 3D model of the platform was to see how the parts in the camera mount would fit.

And the second was to check the clearance between the LIDAR and the heatsink.

A third advantage of having a full 3D model of our robot is that you can import it into Gazebo, ROS’ simulation environment, and have a virtual representation of your robot.

5. Prepare for Fabrication

In the case of laser cutting, there’s very little to do to prepare something for fabrication. FabLabs will usually give you a template file that corresponds to the working surface of their machine. Import it, and try to fit all the parts in that template.

Sometimes you may end up drawing a line on top of another line. That usually confuses laser cutters and can result in over-burning the material, so you’ll need to eliminate those duplicates. In AutoCAD, the overkill command does that automatically.

Kerf is the width of material that the process removes as it cuts through the plate. When designing box joints, make sure you remove about 0.1-0.2mm from every side of the “teeth” to compensate for it. If you don’t, you’ll probably have to use a file to make the parts fit.

Next up…

we’ll explore the build process, talk a bit about software, and discuss some of the mistakes and fail moments.


That is why I just said they are sufficient! For the price they are nice enough, and they allow you to make mistakes on your first builds. But yes, for a serious build it is better to invest in serious electronics. We always get what we pay for!

Here is a shop with the I2C SR04: HC-SR04 Ultrasonic Sensor - I2C. You can also get it from Amazon, AliExpress, or whatever generic electronics shop carries Arduino modules.

Thanks for the robotics 101 link. Here is another one, with maybe more concrete examples and tips: How to Build a Robot - SDR Wiki. They also have a nice shop, but it is very expensive outside the USA.


So cool! I’m amazed at the ability to differentiate between a vase and a bottle.


The robotics 101 link is great; I’ll actually edit the post and add this one too, thanks. As for the I2C-based SR04: sadly, its address is hard-coded, so you cannot simply daisy-chain them, but it’s cool nonetheless.

Taking a step back…

Confession time: my build logs haven’t been exactly in sync with my work on this project. The robot was mostly done and working by the time I started publishing these logs. That said, I had about three weeks to play around with the platform and find its weaknesses:

  • The robot was fast, ridiculously fast sometimes, but not too powerful; anything more than its own weight would make it crawl.
  • The ball casters were too heavy and didn’t provide enough ground clearance.
  • Very messy and inconvenient wiring.
  • Too compact, with no extensibility. Where do we put extra stuff?
  • The battery system was a bit too expensive, and lacked charging functionality.

After that, I had a week of leave. Upon returning home, I decided the list above was enough to make me take a step back and consolidate the platform based on the things I’d learned in the meantime.

Frame

It’s round now. Why? More surface to add extra stuff, easier to hide wires, plus it kind of looks like a vinyl record, so that’s cool.

The most important factor that led me to redesign it was that the ball casters were kind of heavy, and the robot would get stuck on different things very easily. It just felt like the drivetrain was not sturdy enough for an extensible robot platform. That’s why I moved to a tracked design based on this Pololu kit. This way, the ground clearance has been increased to almost 3cm.

Additionally, the motors have been swapped for N20 motors, which provide more torque, again leaving some room for extension and maybe even some capacity to carry modest loads (more on that after some testing).

Electronics

The electronics were another area I wasn’t very happy with in the first iteration, mostly the power circuit, which was too expensive (±48 Euros) and lacked charging functionality. After reading @jmakivic’s build log, I realized I should give USB power packs another chance.

Even if the stall current of each motor is theoretically 1.5A, this shouldn’t be reached in normal circumstances on a robot with obstacle detection. Additionally, the encoders can detect stalls easily, and power to the drivers can be cut if needed. On top of that, most power pack circuits automatically shut down on over-current.

In conclusion, I’m pretty confident that a power pack can support the platform, even if its theoretical maximum consumption is higher than the 3 amps that power packs usually supply.

In my few days of testing this didn’t seem to be a problem, but if it proves to be, there’s always the option of using two 5000mAh power packs, each with its own regulator, instead of one large 10000mAh pack. In terms of cost, two power packs are still cheaper than the previous power solution.

Anyway, this is what the block schematic of the system looks like:


Well, it kind of looks like the metro map of a busy city, meaning we need a central station.
Starting from the positioning of all the components on the frame, I came up with this:

The 5-pin connectors on the upper side are for the three VL53L1X ToF sensors.
The grouped connectors on the lower left side go to the Jetson Nano.
And on the right, there’s a servo connector and the connector for the encoders.


The layout proved to be quite challenging, so I might upgrade this to a single-sided PCB later.

And finally, here’s the first floor fully assembled, including the ToF distance sensors.

Testing

Before adding another layer on top, which would make access to the mainboard and the other components impossible, it’s a good idea to connect to our SBC and test the components to make sure everything works.

The first step is to test the VL53L1X ToF sensors:
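The usual tool for this is `i2cdetect -y 1`; here’s a rough Python equivalent using smbus2 that probes bus 1 and prints whatever answers:

```python
# Rough Python equivalent of `i2cdetect -y 1`: probe every standard
# address on I2C bus 1 and print whatever answers.
from smbus2 import SMBus

with SMBus(1) as bus:
    for addr in range(0x03, 0x78):
        try:
            bus.read_byte(addr)
            print("found device at", hex(addr))
        except OSError:
            pass
```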

So, 0x5D is the address of the Qwiic Motor Driver, and 0x29 is the default address of the VL53L1X. However, all three sensors share that same default address.

Luckily, there’s a fix for that. Each sensor has a sleep/shutdown pin, and the address register is changeable, so we can wake the sensors one at a time and assign each a new address.
Here’s how that’s done.
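A sketch of that sequence, using Pimoroni’s vl53l1x Python library and its change_address helper (check your library version); the XSHUT pin numbers and new addresses are assumptions for illustration:

```python
# Re-addressing sketch with Pimoroni's vl53l1x library. The XSHUT pin
# numbers and new addresses are assumptions for illustration; holding
# XSHUT low keeps a sensor asleep while the others are re-addressed.
import time
import Jetson.GPIO as GPIO
import VL53L1X

XSHUT = [15, 16, 18]            # hypothetical BOARD pins, one per sensor
NEW_ADDR = [0x2A, 0x2B, 0x2C]

GPIO.setmode(GPIO.BOARD)
GPIO.setup(XSHUT, GPIO.OUT, initial=GPIO.LOW)   # all three asleep

for pin, addr in zip(XSHUT, NEW_ADDR):
    GPIO.output(pin, GPIO.HIGH)                 # wake one sensor; it boots at 0x29
    time.sleep(0.1)
    tof = VL53L1X.VL53L1X(i2c_bus=1, i2c_address=0x29)
    tof.open()
    tof.change_address(addr)                    # move it off the default address
    tof.close()
```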

Nice, we got three different I2C addresses on the bus. Let’s test the ranges the sensors actually report. Again, the (very hacky) code is here.
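For reference, the polling boils down to something like this (not the exact code from the repo; addresses match the sketch above):

```python
# Poll all three re-addressed sensors and print the ranges in mm
# (addresses match the re-addressing sketch above).
import VL53L1X

sensors = [VL53L1X.VL53L1X(i2c_bus=1, i2c_address=a) for a in (0x2A, 0x2B, 0x2C)]
for tof in sensors:
    tof.open()
    tof.start_ranging(1)        # 1 = short-range mode

while True:
    print([tof.get_distance() for tof in sensors])
```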


Nice, that looks about right. What about the motors?

The code for this test is available on GitHub.
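For a rough idea of what such a test looks like, here’s a hedged smoke test with SparkFun’s qwiic_scmd Python package (not the exact code from the repo):

```python
# Hedged smoke test with SparkFun's qwiic_scmd package: ramp both
# channels forward, then stop.
import time
import qwiic_scmd

driver = qwiic_scmd.QwiicScmd()
driver.begin()
driver.enable()

for speed in range(0, 255, 10):     # ramp up
    driver.set_drive(0, 0, speed)   # motor 0, forward
    driver.set_drive(1, 0, speed)   # motor 1, forward
    time.sleep(0.05)

driver.set_drive(0, 0, 0)           # stop
driver.set_drive(1, 0, 0)
driver.disable()
```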
Ok, it seems like everything is working. Let’s move on to the upper floor.

Upper Floor

Well, this one is pretty straightforward. We need to mount the SBC and, of course, the camera.


Something I can’t stress enough is how important wiring is if you want to be able to perform maintenance, upgrade or just generally fiddle around with your robot.

For convenience, I added the long holes on the sides of the frame for either screwing parts in, or using zip ties.

And here’s the two floors together.

It kind of looks a bit funny though. Looks like it’s missing something.


Now that’s more like it.

And here are a couple more angles of the full assembly.

Coming up…

I’ll show you some videos of the robot in action and talk about the software used to bring it to life.


@dragomir It’s so cool to see the progress that you’re making on your robot! I really like the in depth descriptions of your trial and error process as you build this platform.