My adventures with a Raspberry Pi and Arduino programming


AJ Arduino pin usage

I need to ensure I have enough oomph in my hardware. For this I need to know that I have both enough pins for all my hardware and enough power for my application.

Pins in use on my Arduino Pro Mini

  • 0 Serial RX – SD card
  • 1 Serial TX – SD card
  • 2 INT0 – Software serial radio RX
  • 3 INT1 PWM – Ascent/descent
  • 4 T0 – Software serial radio TX
  • 5 T1 PWM – Forward / aft
  • 6 AIN0 PWM – Turn port/starboard
  • 7 AIN1 – free
  • 8 CLK0 – free
  • 9 OC1A PWM – free
  • 10 SPI SS PWM – free
  • 11 SPI MOSI PWM – free
  • 12 SPI MISO – free
  • 13 SPI SCK LED – free
  • 14 A0 ADC0 – free
  • 15 A1 ADC1 – free
  • 16 A2 ADC2 – free
  • 17 A3 ADC3 – free
  • 18 A4 ADC4 I2C SDA – I2C rangefinders x 6, 9 DoF sensor (accelerometer, gyro, magnetometer)
  • 19 A5 ADC5 I2C SCL – I2C rangefinders x 6, 9 DoF sensor (accelerometer, gyro, magnetometer)
  • A6 ADC6 Analog only – charging current
  • A7 ADC7 Analog only – low voltage
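To sanity check the assignments, here's roughly what they'd look like in a sketch. The constant names, the radio baud rate and the one-second logging loop are just placeholders, not final values:

    // Pin assignments from the list above (Arduino Pro Mini, 3.3V).
    // SoftwareSerial and Wire are the standard Arduino libraries.
    #include <SoftwareSerial.h>
    #include <Wire.h>

    const uint8_t RADIO_RX_PIN = 2;   // INT0 - software serial radio RX
    const uint8_t ASCENT_PIN   = 3;   // PWM  - ascent/descent channel
    const uint8_t RADIO_TX_PIN = 4;   // software serial radio TX
    const uint8_t FWD_AFT_PIN  = 5;   // PWM  - forward/aft channel
    const uint8_t TURN_PIN     = 6;   // PWM  - turn port/starboard channel
    const uint8_t CHARGE_PIN   = A6;  // analog only - charging current
    const uint8_t LOW_VOLT_PIN = A7;  // analog only - low voltage

    SoftwareSerial radio(RADIO_RX_PIN, RADIO_TX_PIN);

    void setup() {
      Serial.begin(9600);   // hardware serial (pins 0/1) to the SD card logger
      radio.begin(9600);    // placeholder radio baud rate
      Wire.begin();         // I2C bus (A4/A5): rangefinders and 9 DoF sensor
      pinMode(ASCENT_PIN, OUTPUT);
      pinMode(FWD_AFT_PIN, OUTPUT);
      pinMode(TURN_PIN, OUTPUT);
    }

    void loop() {
      int chargeRaw  = analogRead(CHARGE_PIN);    // charging current sense
      int lowVoltRaw = analogRead(LOW_VOLT_PIN);  // battery voltage sense
      Serial.print(chargeRaw);
      Serial.print(',');
      Serial.println(lowVoltRaw);                 // log to the SD card over serial
      delay(1000);
    }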

Power usage

Figures are min / typical / max where known:

  • Sharp 10cm-80cm sensors (x6): ? / 30 mA x 6 / 40 mA
  • SparkFun I2C rangefinders (x6): 0.3 mA / 1.7 mA x 6 / ?
  • 9 DoF sensor: accelerometer 0.4 mA (at 10 Hz), gyro 6.5 mA, magnetometer 0.1 mA
  • Arduino (high power mode): 0.5 mA / 4.13 mA / 4.13 mA
  • Current monitor: 0 (read on the Arduino)
  • Low voltage monitor: ? / TBD
  • SD card reader/writer: 2 mA / 5 mA / 6 mA
  • Radio TX/RX (BLE Nano): 4.7 mA / 8.0 mA / 11.8 mA
  • Copter circuitry itself (max): ? / 3258 mA (3.3 A) / ?
  • Totals (typical): electronics 214.33 mA, copter 3258 mA, overall ≈ 3472 mA

The above typical power usage figures with a modified 2000mAh/3.7V battery give a fully operating flight time of roughly 34.5 minutes.

We can probably halve the electronics’ power usage, but this would only get us to 35.6 minutes flight time – so what’s the point? We will have to test the actual flying at low speed to see how long the copter will fly itself around. I’m guessing the actual flight time will end up nearer an hour with the bigger 2000mAh battery I’ll be using.
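For the record, the sums behind those figures are just battery capacity divided by current draw (values straight from the table above, nothing measured yet):

    // Flight time (minutes) = battery capacity (mAh) / total draw (mA) * 60
    #include <cstdio>

    int main() {
      const double capacityMah   = 2000.0;   // modified 2000mAh/3.7V pack
      const double electronicsMa = 214.33;   // typical electronics draw
      const double copterMa      = 3258.0;   // copter motors/ESCs

      double fullDraw = capacityMah / (electronicsMa + copterMa) * 60.0;
      double halvedElectronics = capacityMah / (electronicsMa / 2.0 + copterMa) * 60.0;

      std::printf("Full draw:          %.1f min\n", fullDraw);           // ~34.6 min
      std::printf("Halved electronics: %.1f min\n", halvedElectronics);  // ~35.7 min
      return 0;
    }

(The small differences from the 34.5/35.6 minute figures above are just rounding.)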

Eventually, if the bot ends up performing tasks with the rotors off (observation from a static point, perhaps?), then looking at power savings in the electronics begins to make sense. As a bare minimum I'll shut the sensors off when the bot is stationary (landed).

AJ Airbot Project Introduction

Why an Airbot?

I love little helicopters and quadcopters. Never owned a quad though, so thought I'd get a small one. It's great to fly aircraft, but even better to programme them and watch them fly themselves! I wanted to create something that responded to basic stimuli and would work indoors – perfect for experimenting with aviation.

I love the idea of those little bugs you find in gadget stores – they jiggle around, follow the edges of a cage, and keep walking around. They're simple but interesting as an academic exercise. I wanted to do something a little like this, but with a quadcopter… I love aircraft!

Where to start?

I figured I'd start with a very small nano quadcopter. This also means I can run everything at Arduino system voltage (around 3.3V), making the electronics easy. I've just bought a Hubsan X3 mini (7cm diameter) quadcopter. This can be used indoors or outdoors. I don't want to re-invent the (very complex) wheel of flight controls – so I intend to retrofit this quad. I'll let the onboard electronics worry about stability – all my code is going to do is say hover, up, down, forward, back, turn – just like the old Logo robots from school!

Controlling the aircraft

This does imply a few things though. Firstly I need to interpose an Arduino into the lines from the control receiver to the flight control unit – so the Arduino is saying forward/backward, not me. Shouldn't be too hard (!)
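From the Arduino side the injected commands could be as simple as the sketch below, assuming the flight control unit's ascent/forward/turn inputs behave like standard 1000-2000µs RC channels – that's an assumption I still need to verify on the Hubsan board. The pin numbers are the ones from the pin usage list.

    // Driving the flight controller's three channels with the standard Servo
    // library. NEUTRAL and the +/- ranges are placeholder values.
    #include <Servo.h>

    Servo ascentCh;   // pin 3 - ascent/descent
    Servo fwdAftCh;   // pin 5 - forward/aft
    Servo turnCh;     // pin 6 - turn port/starboard

    const int NEUTRAL = 1500;   // mid-stick pulse width in microseconds

    // Hold everything at mid-stick.
    void hover() {
      ascentCh.writeMicroseconds(NEUTRAL);
      fwdAftCh.writeMicroseconds(NEUTRAL);
      turnCh.writeMicroseconds(NEUTRAL);
    }

    // amount is roughly -500..+500: positive = up / forward / starboard.
    void ascend(int amount)  { ascentCh.writeMicroseconds(NEUTRAL + amount); }
    void forward(int amount) { fwdAftCh.writeMicroseconds(NEUTRAL + amount); }
    void turn(int amount)    { turnCh.writeMicroseconds(NEUTRAL + amount); }

    void setup() {
      ascentCh.attach(3);
      fwdAftCh.attach(5);
      turnCh.attach(6);
      hover();
    }

    void loop() {
      ascend(100);   // gentle climb...
      delay(2000);
      hover();       // ...then hold
      delay(5000);
    }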

After this I need to get basic controls working. I need the Arduino to 'know' it has taken off, how high it is, and how to land again. The same goes for any movement. Thus the Arduino needs to know its acceleration along the x, y and z axes. This is something the flight controller may be doing anyway – if I can intercept these sensor readings then great, else I'll have to add some more sensors for this.

Position

To know my height I need to integrate my acceleration along the z axis over time. The same goes for x and y position. This means a basic inertial navigation system (INS) needs to be developed. This will help me find my landing pad again, and to 'learn' how to navigate around rooms. It basically provides a way of remembering walls in a room too.
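The core of that INS is just integrating acceleration twice. A bare-bones version (assuming calibrated readings in m/s² with gravity already removed – drift is the real problem and isn't handled here):

    // Dead reckoning: acceleration -> velocity -> position.
    struct InsState {
      float vx = 0, vy = 0, vz = 0;   // velocity (m/s)
      float x  = 0, y  = 0, z  = 0;   // position relative to take-off (m)
    };

    // ax/ay/az in m/s^2 (gravity removed), dt in seconds since the last update.
    void insUpdate(InsState &s, float ax, float ay, float az, float dt) {
      s.vx += ax * dt;   s.vy += ay * dt;   s.vz += az * dt;    // integrate once
      s.x  += s.vx * dt; s.y  += s.vy * dt; s.z  += s.vz * dt;  // integrate twice
    }

Call it every few milliseconds from the sensor loop and s.z is the height above the landing pad – at least until drift creeps in.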

Eventually I’ll want to add some IR distance sensors for fine input, and to sense obstacles, but initially knowing how far above the landing zone I am will suffice.

Learning about the environment

Flies respond simply to external stimuli. This is why they keep head butting windows. I want something simple, but not that simple. I could create a 1cm grid of an entire house, but this would be expensive in storage terms. My bot is 7cm in diameter. My likely proximity sensors have a 10cm minimum distance. If I assume a 10cm x 10cm x 10cm cube around my airbot then I can safely map a grid at 15cm resolution.

I can provide a 10cm safety zone around the aircraft – e.g. 10cm below, 7cm for the aircraft, and 10cm above. This is 27cm, and so a point (a wall found to be) within 30cm means 'I can't fit there' whereas a gap means 'I can fit there'. So approximating a room into 15cm increments provides basic navigation. My front room where I'm typing now is approx 4m x 5m x 2.2m – which means approx 27 x 34 x 15 data points, so 13770 data points, each of 1 bit, so 1.68 KB of data. Not too bad at all. I could even simplify this eventually by storing planes for walls with four lines for boundaries, dropping the storage required significantly.
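As a sketch, that grid is just a bit array (the sizes match the front room numbers above; the names are mine):

    // 15cm occupancy grid for a ~4m x 5m x 2.2m room: 27 x 34 x 15 cells,
    // one bit per cell = 13770 bits, about 1.7 KB.
    #include <stdint.h>

    const uint8_t NX = 27, NY = 34, NZ = 15;
    uint8_t grid[(NX * NY * NZ + 7) / 8];   // zero-initialised: everything 'free'

    uint16_t cellIndex(uint8_t x, uint8_t y, uint8_t z) {
      return ((uint16_t)z * NY + y) * NX + x;
    }

    void markWall(uint8_t x, uint8_t y, uint8_t z) {   // 'I can't fit there'
      uint16_t i = cellIndex(x, y, z);
      grid[i >> 3] |= (1 << (i & 7));
    }

    bool canFit(uint8_t x, uint8_t y, uint8_t z) {     // 'I can fit there'
      uint16_t i = cellIndex(x, y, z);
      return !(grid[i >> 3] & (1 << (i & 7)));
    }

One thing to watch: that 1.7 KB is most of the Pro Mini's 2 KB of SRAM, so in practice the grid (or the simplified wall planes) would probably live on the SD card rather than in memory.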

Autonomous flight

Being able to fly itself also implies it must manage its own health. For airbots this means battery usage. I'll use a SparkFun Sunny Buddy and a solar panel to charge the unit. I'll also use the trick mentioned on its tutorial page to measure the current flowing to the battery (and thus whether it is not yet full), and, if I can, a 'low current' warning too (nearly empty).
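Whatever the exact circuit ends up being, the Arduino side is just two analogue reads. Something like the below – the sense resistor value, divider ratio and thresholds are placeholders until I've built it:

    // Battery 'health' stimuli from the two analogue-only pins.
    // RSENSE, DIVIDER and both thresholds are placeholder values.
    const float VREF    = 3.3;    // 3.3V Pro Mini ADC reference
    const float RSENSE  = 0.1;    // ohms - assumed charge-current sense resistor
    const float DIVIDER = 2.0;    // assumed battery voltage divider ratio

    float chargeCurrentMa() {
      float volts = analogRead(A6) * VREF / 1023.0;  // across the sense resistor
      return volts / RSENSE * 1000.0;                // I = V / R, in mA
    }

    float batteryVolts() {
      return analogRead(A7) * VREF / 1023.0 * DIVIDER;
    }

    bool isCharging() { return chargeCurrentMa() > 10.0; }  // battery still filling
    bool isHungry()   { return batteryVolts()   < 3.5;  }   // go and find some sun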

I can use this information as ‘fear’ stimuli – in this case to find a location to sit still that provides a high charge rate – i.e. somewhere to sit in the sun. Just like my Labrador does (although for different reasons!)

Other ‘health’ and ‘fear’ indicators will also be useful. Houses are full of things that move. Thus not only will my room learning code need to figure out what is fixed versus what is moveable (accomplished by ‘number of observations’ of ‘wall here’ at a particular location), but it will also need a short term ‘collision avoidance’ mechanism.

The Science Museum has some simple toys that do this. They hover and are moved by you bringing your hands near them, and they steer themselves clear. I’ll do something similar. Useful in particular for doorways, labradors, and crap sitting in my office.

Behaviour and flight planning

As you can tell from the above, the bot will need several types of behaviour. Here’s a typical flight:-

  1. AJ is stationary in the ‘idle’ mode on a desk.
  2. AJ decides “I’m bored” and chooses to ‘take off’ (transition from idle to flying states)
  3. Once this manoeuvre is complete, AJ randomly chooses to ‘explore’ (transition from flying to exploring states)
    1. The explore state’s ‘planning’ step is invoked. This chooses a direction to go in that does not interfere with known ‘local’ hazards (like known walls, desks, etc.)
    2. This pushes a 'turn', then a 'forward', and an 'ascent/descent rate' action on to the activity list (in this order)
    3. This action also then pushes the ‘navigate near to location’ action on to the list (at the end)
    4. When nearby, the final action ‘hover’ is invoked.
    5. The action list is now empty, so explore’s transition chooser determines what to do next.
  4. How 'adventurous' AJ has been told to be determines what percentage chance AJ has of continuing to explore (i.e. transitioning to the same state). Say it's 80%, and for our purposes he chooses to explore again
    1. We invoke ‘planning’, which in turn adds a turn, forward, ascent/descent rate, navigate near to, and hover command on to the action plan list
    2. Whilst in the 'navigate near to' action a sensor determines that the target is 40cm away, but a wall is 30cm away.
    3. This basic instinct causes an 'avoid' flight action to be executed, and a new 'explore new boundary' phase to be entered into. (This may or may not 'throw away' the original explore command – probably easier if it does)
      1. This implies that an 'explore space' state with an 'avoid' instinct stimulus means we transition to an 'explore new boundary' state
  5. Exploring a boundary. Here we need to determine what line the boundary follows – at the moment we only have a point on it observed.
    1. We first 'observe' the boundary to determine if it is static. If not, the 'avoid' stimulus will fire again (no problem), or the object will move out of the way, in which case we transition back to the explore state originally found.
    2. If we are stationary in the hover, however, and the obstacle doesn't move, we 'peek' left and right, using the direction calculator and INS readings to determine the likely line of the 'wall' (or other obstacle). We then plot a flight plan along this obstacle (turn, forward (slowly?), and no ascent/descent)
    3. As we move along the wall we record that it is still there. If we find an obstacle in the direction of flight, we log a new obstacle and follow that boundary again.
    4. If we don't, then after a while we'll be in an extended line far away from 'known space'. The 'centre of gravity' of the bot's nearest known positions should increase our 'fear'. If this becomes high, we should increase or decrease altitude and move back along our obstacle (hover, turn, ascend, hover, forward)
    5. If we encounter somewhere we’ve been before, again we’ll change height and move back along the obstacle. This time we’ll get a little further because there are more points in the extension, and thus our fear allows us to move further. Thus over time we’ll incrementally explore our environment.
      1. This 'fear of the unknown space' should be a basic instinct – if it hits a certain amount we should 'return to base' – e.g. if I've accidentally ended up outside, we should retrace our steps to the 'last known safe location'. Failing this, navigate home directly.
  6. Eventually our 'low battery'/hungry stimulus will fire, causing us to find a known nice charging point (a place we've had a high recharge rate at before) or return to home (last resort)
  7. A 'tiredness' stimulus may fire after charging even, and cause the bot to return home. Much like my labrador does after eating and drinking after a long walk – he goes and lies down.
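In code, that whole flight boils down to a state machine plus an action list, with the instinct stimuli able to barge in and override the plan. A rough skeleton (all the names, the queue size and the 80% figure are illustrative):

    // Behaviour skeleton: states, an action queue, and instinct-driven transitions.
    enum State  { IDLE, TAKING_OFF, EXPLORING, EXPLORING_BOUNDARY, RETURNING_HOME, LANDING };
    enum Action { TURN, FORWARD, ASCEND_DESCEND, NAVIGATE_NEAR, HOVER };

    const uint8_t MAX_ACTIONS = 8;
    Action  actions[MAX_ACTIONS];
    uint8_t actionCount = 0;
    State   state = IDLE;

    const uint8_t ADVENTUROUSNESS = 80;   // % chance of choosing to explore again

    void pushAction(Action a) {
      if (actionCount < MAX_ACTIONS) actions[actionCount++] = a;
    }

    // The explore state's 'planning' step: pick a direction clear of known
    // hazards (not shown), then queue the manoeuvre in order.
    void planExplore() {
      pushAction(TURN);
      pushAction(FORWARD);
      pushAction(ASCEND_DESCEND);
      pushAction(NAVIGATE_NEAR);
      pushAction(HOVER);
    }

    // Called whenever the action list empties: the state's transition chooser.
    void chooseNextState() {
      switch (state) {
        case IDLE:
          state = TAKING_OFF;                                  // "I'm bored"
          break;
        case EXPLORING:
          if (random(100) < ADVENTUROUSNESS) planExplore();    // same state again
          else state = RETURNING_HOME;
          break;
        default:
          break;
      }
    }

    // Instinct stimuli override whatever was planned.
    void avoidStimulus() {
      actionCount = 0;                 // throw away the current plan
      state = EXPLORING_BOUNDARY;      // go and learn the obstacle's line
    }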

Long distance flight planning

Rooms are weird shapes. As are cities and the ground. This is why aviation has developed flight lanes. An aircraft moves to a flight lane to go beyond a local distance. By remembering 'navigation friendly' intersecting boxes we can provide a safe and finite way to plan routes that take into account complex urban and in-house environments. This will effectively map the centre of a room and its doorways – a simple cuboid we're allowed to fly into.

This means 'take me home, I'm scared!' can allow me to fly back to my last safe position, then to my last safe navigation. From here I can use navigations to plan a route to the safe navigation nearest my home position (and without a wall in between!).

This happily is a simple directed graph problem to solve. I count the minimum distance between centre points of navigations that intersect. So when entering room A through doorway 1 and evaluating the routes via doorway 2, I count the 'cost' of the manoeuvre as the distance from the centre point of room A's intersection with doorway 1 to the centre point of its intersection with doorway 2. Easy planar mathematics using Pythagoras.

I can even weight route choices by the quality of a navigation – its volume. A higher volume is more like a highway. These should be preferred routes. It also means a likely outcome is a 'shortcut' – going out the kitchen window, up the side of the building, and into the office window in my house rather than flying up the stairs!
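The cost calculation itself is tiny. Something like this, where a 'navigation' is one of those allowed cuboids and the cost of crossing it is the distance between the centre points of two of its intersections, optionally discounted by volume so the 'highways' win (the names and the weighting formula are illustrative):

    #include <math.h>

    struct Point { float x, y, z; };   // centre of an intersection with a doorway/window
    struct Nav   { float volume; };    // one navigation cuboid (m^3)

    // Pythagoras: straight-line distance between two intersection centres.
    float crossingCost(const Point &a, const Point &b) {
      float dx = b.x - a.x, dy = b.y - a.y, dz = b.z - a.z;
      return sqrt(dx * dx + dy * dy + dz * dz);
    }

    // Prefer roomier navigations: a bigger cuboid looks cheaper to cross.
    float weightedCost(const Point &a, const Point &b, const Nav &nav) {
      return crossingCost(a, b) / (1.0 + nav.volume);
    }

These weighted costs become the edge weights on the directed graph of intersections, and a standard shortest-path search over that graph picks the route home.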

Memory

As we’re flying about we should try and remember things we find along the way. This should include areas where temporary obstacles aren’t found that often (like people, dogs, etc.). Knowing this provides the ability to find ‘rest spots’ or even ‘hide holes’. Useful in an urban environment.

Remembering temperature is also useful. A cold spot means poor battery life, whereas a hot one risks overheating. Each of these things can have its own memory, or feed into a general 'fear factor' for particular locations. Fearing the dog's room, for example, is quite useful.
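That memory doesn't need to be anything fancy – a small record per remembered location, kept for a handful of notable spots rather than every grid cell (which wouldn't fit in RAM). The fields here are just a starting point:

    // Per-location memory: how often an obstruction has been seen there,
    // the temperature, and an accumulated 'fear' score.
    struct PlaceMemory {
      uint8_t observations;   // times 'wall here' was observed (fixed vs moveable)
      int8_t  temperatureC;   // last temperature reading at this spot
      uint8_t fear;           // 0 = relaxed .. 255 = the dog's room
    };

    // People and dogs only get observed occasionally, so a low count means
    // the obstruction is probably temporary and the cell is still passable.
    bool isProbablyFixed(const PlaceMemory &m) { return m.observations > 10; }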

Basics complete

Once the above is done we will have an autonomous super-fly that is capable of seeing to its own dietary and survival needs, and mastering its environment. This is a minimum viable product really, to which additional features and behaviours can be added.

Things needed

Clearly proximity, temperature, current and low-voltage sensors are all needed, as is a state machine with simple static transitions mapped out. Shouldn't be too difficult to build a super fly from a quadcopter!

What is not needed for this simple world is artificial intelligence – there is no 'learning' in the AI sense here, only 'mapping' an environment. Intelligence has an edge because it can be put into new environments and use past experiences to help with decisions. This is not what I'm proposing. I'm proposing learning one fixed environment provided by the bot's creator. Hence no neural nets or anything else. There's nothing stopping you adding those instead of a state machine though.

Future extensions

To turn a fly into a useful 'pet' or 'companion', a further level of behaviours (not necessarily intelligence) is needed: the ability to recognise people and communicate with them – LEDs and a simple sound generator, and the ability to listen for its own name and commands, much like a dog does.

Some commands are quite easy and desirable:-

  • Bedtime – sends to sleep when you’re working on a project / conference call
  • Go to ‘x’ – a named navigation space? remember 10 ‘words’ heard when in a particular navigation space perhaps?
  • Follow me – watch me (person speaking) and follow on
  • Watch me – record video for posterity, doesn’t necessarily imply follow me
  • Stay – hold position, but not necessarily attitude (so stay and watch me can be combined)
  • Find Person X – navigate through navigations to within voice print of a particular person. Perhaps combined with last known location of that person’s voice, or recent locations you’ve heard them
  • Come – come near the speaker
  • AJ – pre-command authentication – only responds to top 5 people it’s been near
  • Go with X – adds person X to ‘owner’ list temporarily (like going with an auntie for a day)
  • Lay down – land at current position
  • Move left/right/forward/back – fine positioning before landing.
  • AJ you’re at ‘environment x’ in ‘navigation y’ – when first activated, so able to come with you to new places, e.g. climbing crags.
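All of those map naturally onto a command dispatcher feeding the same state machine and action list sketched earlier. A skeleton (command names only – the voice recognition itself is a much bigger problem and well out of scope here):

    enum Command {
      CMD_BEDTIME, CMD_GO_TO, CMD_FOLLOW_ME, CMD_WATCH_ME, CMD_STAY,
      CMD_FIND_PERSON, CMD_COME, CMD_GO_WITH, CMD_LAY_DOWN, CMD_NUDGE
    };

    // Only obey if the command was prefixed with 'AJ' by a recognised owner.
    void handleCommand(Command cmd, bool authenticated) {
      if (!authenticated) return;
      switch (cmd) {
        case CMD_BEDTIME:  /* land and power down the sensors */        break;
        case CMD_STAY:     /* hold position, not necessarily attitude */ break;
        case CMD_LAY_DOWN: /* land at the current position */           break;
        case CMD_COME:     /* navigate towards the speaker */           break;
        // ... the rest plug into the behaviour state machine ...
        default: break;
      }
    }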