Machine Learning

Voice recognition and control system

Role

Ideation,
coding/prototyping

Timeline

June 2019
(3-week project)
4th Semester

Tools

JavaScript, p5.js,
Wekinator, Arduino
WebSocket, Node.js
Processing, OSC

The goal of this project was to create a speech recognition and control system. It should detect who is talking (the voice of a person) and what that person is saying, enabling a more personalized interaction between the system and the user. In this case, an ambient light switches to a different mode depending on which person says the trigger word.

01 Coding


Speech Recognition

Speech recognition runs in the browser with p5.js. When a text phrase such as "light on" is detected, the browser emits it to the WebSocket server (Node.js), which forwards the speech data via OSC to Processing.
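
As a rough sketch of this browser step, the code below assumes the p5.speech library for recognition and a socket.io connection to a hypothetical Node.js server at localhost:3000; the event name "speech" and the trigger phrases are placeholders, not the project's exact values.

```javascript
// Browser sketch: p5.js + p5.speech + socket.io client.
// Assumption: a Node.js socket.io server on localhost:3000 forwards
// the received phrase via OSC to Processing.
let speechRec;
const socket = io('http://localhost:3000'); // placeholder server address

function setup() {
  noCanvas();
  // continuous recognition, final results only
  speechRec = new p5.SpeechRec('en-US', gotSpeech);
  speechRec.continuous = true;
  speechRec.interimResults = false;
  speechRec.start();
}

function gotSpeech() {
  const phrase = speechRec.resultString.toLowerCase();
  // forward only recognized trigger phrases (placeholder examples)
  if (phrase.includes('light on') || phrase.includes('light off')) {
    socket.emit('speech', phrase); // 'speech' is a placeholder event name
  }
}
```

On the server side, the received phrase can then be repackaged as an OSC message (for example with the node-osc package) and sent on to Processing.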

Voice Recognition

An openFrameworks tool extracts MFCC features from the audio signal and sends them to Wekinator. The machine learning tool Wekinator is trained on the voices of the different users so it can tell them apart for personalized light control. MQTT is used for the connection between Processing and the ESP8266; the data is published on the topic "/client" in the local network. A sketch of this bridge follows below.
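
The project handles this bridge in Processing; purely as an illustration of the same flow in JavaScript, the sketch below assumes the node-osc and mqtt npm packages, Wekinator's default output message /wek/outputs on port 12000, and a placeholder MQTT broker address on the local network.

```javascript
// Node.js bridge sketch: Wekinator output (OSC) -> MQTT -> ESP8266.
// Assumptions: Wekinator sends its classifier output to OSC port 12000
// as /wek/outputs (its default), and an MQTT broker is reachable at a
// placeholder address on the local network.
const { Server } = require('node-osc');
const mqtt = require('mqtt');

const mqttClient = mqtt.connect('mqtt://192.168.0.10'); // placeholder broker
const oscServer = new Server(12000, '0.0.0.0');

oscServer.on('message', (msg) => {
  // msg = ['/wek/outputs', class1, ...]
  if (msg[0] === '/wek/outputs') {
    const userClass = Math.round(msg[1]); // which trained voice was recognized
    // publish the recognized user id so the ESP8266 can pick a light mode
    mqttClient.publish('/client', String(userClass));
  }
});
```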

02 Physical


Find the code and more technical information on GitHub to rebuild it yourself!


