I THINK ∴ I'M DANGEROUS

Bob The Skull

What should go here is an explanation of what this project is and why it exists. Instead, I'll say this: Alexa should be dumber and more annoying.

Tests

Test 1

Hardware

  • Raspberry Pi Model B+ (what I had lying around)
  • Junk-bin USB Wi-Fi adapter (ditto)
  • Custom Pi-Hat to interface with/drive hardware
  • Some kind of not-too-shitty USB microphone

Datasheets

'Cause I know I'll need 'em.

Outputs

All outputs need a 220 Ohm series resistor, since each Raspberry Pi GPIO should only source about 15mA: 3.3V / 220 Ohm ≈ 15mA (a bit less once the LED's forward voltage drop is counted).

| Name | GPIO Pin | Use | mA | Notes |
| --- | --- | --- | --- | --- |
| Left Eye | | | | |
| Right Eye | | | | |
| Brain Light | | Thinking Indicator | | Implement with PWM so a pulsing/fading effect can be used (see the sketch below the table) |
| Panel LED 1 | | | | |
| Panel LED 2 | | | | |
| Panel LED 3 | | | | |
| Panel LED 4 | | | | |
| Mouth | | Blah blah blah | 330 | 330mA @ 3.3V = 10 Ohm; far more than a GPIO can source, so drive it through a transistor |
| Lighting | | | | |
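
Driving these outputs from Go could look something like the sketch below, using the go-rpio library (github.com/stianeikeland/go-rpio). The pin numbers are placeholders since the table is still unfilled, and hardware PWM is only available on a few BCM pins (e.g. GPIO 18), so the brain light would need to land on one of those.

```go
package main

import (
	"time"

	"github.com/stianeikeland/go-rpio/v4"
)

func main() {
	if err := rpio.Open(); err != nil {
		panic(err)
	}
	defer rpio.Close()

	// Placeholder pin assignment; the table above is still TBD.
	leftEye := rpio.Pin(17)
	leftEye.Output()
	leftEye.High() // eyes lit

	// Brain light on a hardware-PWM-capable pin (BCM 18).
	brain := rpio.Pin(18)
	brain.Mode(rpio.Pwm)
	brain.Freq(64000) // PWM clock; effective cycle = 64 kHz / 32 steps = 2 kHz

	// Pulse the brain by ramping the duty cycle up and down.
	for {
		for d := uint32(0); d <= 32; d++ {
			brain.DutyCycle(d, 32)
			time.Sleep(20 * time.Millisecond)
		}
		for d := uint32(32); d > 0; d-- {
			brain.DutyCycle(d, 32)
			time.Sleep(20 * time.Millisecond)
		}
	}
}
```

If GPIO 18 ends up taken, bit-banged software PWM would also work for the pulsing effect, at the cost of some CPU.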

Inputs

| Name | GPIO Pin | Use | mA | Notes |
| --- | --- | --- | --- | --- |
| Button | | | | |
| Photoresistor | | presence / movement detection | | Can use to “smartly” add “oh hey there” / “hello” type messages |
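
The B+ has no ADC, so the photoresistor can't be read as an analog level directly; the sketch below assumes it sits in a voltage divider tuned so the pin flips between high (lit room) and low (dark). Pins are placeholders again.

```go
package main

import (
	"fmt"
	"time"

	"github.com/stianeikeland/go-rpio/v4"
)

func main() {
	if err := rpio.Open(); err != nil {
		panic(err)
	}
	defer rpio.Close()

	button := rpio.Pin(23) // placeholder pin; button shorts to ground
	button.Input()
	button.PullUp()

	light := rpio.Pin(24) // placeholder pin; photoresistor divider output
	light.Input()

	lastButton, lastLight := button.Read(), light.Read()
	for {
		if b := button.Read(); b != lastButton {
			lastButton = b
			if b == rpio.Low {
				fmt.Println("button pressed")
			}
			time.Sleep(50 * time.Millisecond) // crude debounce
		}
		if l := light.Read(); l != lastLight {
			lastLight = l
			if l == rpio.High {
				fmt.Println("lights came on / someone walked in -> say hello")
			}
		}
		time.Sleep(10 * time.Millisecond)
	}
}
```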

Software

Alpine Linux + a Go application to control the hardware. Various Go packages to interact with AVS (Alexa Voice Service).
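
There's no real AVS client here yet, but the shape of the app is probably three pieces talking over channels: wake word engine, AVS bridge, and the animation/audio side. Everything in this sketch (wakeEvent, talkToAVS, and friends) is an invented placeholder, not a real SDK.

```go
package main

import (
	"fmt"
	"time"
)

type wakeEvent struct{ at time.Time }

type directive struct{ audio []byte }

// listenForWakeWord stands in for the wake word engine.
func listenForWakeWord(out chan<- wakeEvent) {
	for {
		time.Sleep(5 * time.Second) // pretend we heard the wake word
		out <- wakeEvent{at: time.Now()}
	}
}

// talkToAVS stands in for streaming mic audio up and getting speech back.
func talkToAVS(in <-chan wakeEvent, out chan<- directive) {
	for range in {
		out <- directive{audio: []byte("pretend PCM")}
	}
}

func main() {
	wake := make(chan wakeEvent)
	directives := make(chan directive)
	go listenForWakeWord(wake)
	go talkToAVS(wake, directives)
	for d := range directives {
		fmt.Printf("play %d bytes and run the speaking animation\n", len(d.audio))
	}
}
```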

Custom animation library for various responses.

DSP software to match mouth movements to audio output (very simplistic).
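
"Very simplistic" probably means an RMS envelope: chunk the outgoing PCM into short frames, compute each frame's RMS level, and open the mouth whenever the level crosses a threshold. A minimal sketch; the 320-sample (~20ms @ 16kHz) frame and the 0.1 threshold are guesses to tune by ear.

```go
package main

import (
	"fmt"
	"math"
)

// rmsEnvelope chunks 16-bit PCM into frames and returns each frame's RMS
// level, normalized to 0..1.
func rmsEnvelope(samples []int16, frameLen int) []float64 {
	var env []float64
	for i := 0; i+frameLen <= len(samples); i += frameLen {
		var sum float64
		for _, s := range samples[i : i+frameLen] {
			f := float64(s) / 32768.0
			sum += f * f
		}
		env = append(env, math.Sqrt(sum/float64(frameLen)))
	}
	return env
}

func main() {
	// Fake a burst of loud samples followed by silence.
	samples := make([]int16, 640)
	for i := 0; i < 320; i++ {
		samples[i] = int16(20000 * math.Sin(float64(i)/5))
	}
	for _, level := range rmsEnvelope(samples, 320) {
		open := level > 0.1 // hypothetical "mouth open" threshold
		fmt.Printf("level=%.3f mouthOpen=%v\n", level, open)
	}
}
```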

Crap I Have to Write

  • Wake Word Engine - Might be able to reuse the demo program from Amazon. It's C++ garbage.
  • AVS system - Basically the bridge between the hardware and AVS
  • Gesture / Animation Library - pre-defined routines for animating Bob (a rough shape is sketched after this list)
    • The DSP / Speaking mechanism should probably go here
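
One possible shape for the animation library, with every name invented for illustration: an animation is a list of frames, each frame a set of brightness levels plus a hold time, played through a caller-supplied setter so the hardware can be faked in tests.

```go
package main

import (
	"fmt"
	"time"
)

type Output int

const (
	LeftEye Output = iota
	RightEye
	Brain
	Mouth
)

// Frame sets brightness levels (0..255) for some outputs and holds them.
type Frame struct {
	Levels map[Output]uint8
	Hold   time.Duration
}

type Animation struct {
	Name   string
	Frames []Frame
	Loop   bool
}

// Play runs an animation until done (or until stopped, if it loops),
// writing levels out through the supplied setter.
func Play(a Animation, set func(Output, uint8), stop <-chan struct{}) {
	for {
		for _, f := range a.Frames {
			for out, lvl := range f.Levels {
				set(out, lvl)
			}
			select {
			case <-stop:
				return
			case <-time.After(f.Hold):
			}
		}
		if !a.Loop {
			return
		}
	}
}

func main() {
	wink := Animation{
		Name: "cheeky",
		Frames: []Frame{
			{Levels: map[Output]uint8{LeftEye: 0, RightEye: 255, Mouth: 255}, Hold: 300 * time.Millisecond},
			{Levels: map[Output]uint8{LeftEye: 255, Mouth: 0}, Hold: 300 * time.Millisecond},
		},
	}
	Play(wink, func(o Output, l uint8) { fmt.Println(o, l) }, make(chan struct{}))
}
```

Passing the setter in also lets the DSP-driven mouth bypass the frame player and write levels directly.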

Reusable Animations

  • Speaking
    • Mouth movement tied to audio
    • Medium brightness brain and eyes
  • Thinking
    • Pulsing Brain
    • Front LEDs randomly blink on and off one at a time
    • Eyes lit
  • Listening / At Attention
    • Brain Lit
    • Eyes Lit
  • Cheeky Emote
    • Mouth Hangs open
    • Winks one eye
  • Deeply Confused
    • Mouth stays mostly open (gaping gesture)
    • Eyes wink randomly
    • Brain blinks rapidly
    • Blink red/orange front panel LED
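
Reusing the hypothetical Animation/Frame types from the sketch above, “Thinking” might be encoded like this. The random panel-LED blinking doesn't fit fixed frames, so that part would need a small procedural routine on top.

```go
// Brain pulses while the eyes stay lit; panel LEDs are handled elsewhere.
var thinking = Animation{
	Name: "thinking",
	Loop: true,
	Frames: []Frame{
		{Levels: map[Output]uint8{Brain: 64, LeftEye: 255, RightEye: 255}, Hold: 150 * time.Millisecond},
		{Levels: map[Output]uint8{Brain: 160}, Hold: 150 * time.Millisecond},
		{Levels: map[Output]uint8{Brain: 255}, Hold: 150 * time.Millisecond},
		{Levels: map[Output]uint8{Brain: 160}, Hold: 150 * time.Millisecond},
	},
}
```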