Linux controlled by voice and hand gestures for AR?

john2find
New Member, joined Aug 30, 2017
Hi Team,

I came across the MS HoloLens three years back and was completely amazed at what the future holds for us all.
Do we have any open source Linux projects for AR?

With this in mind, I have been preparing myself to create a Linux system whose basic inputs will be hand gestures and voice commands [sounds like Jarvis, I know], with a bit of AI as a support assistant.

I have been reading various articles on how I can utilize OpenGL without X11.
I am almost a noob, but I have good hands-on experience with programming languages.

But as it is my dream, I am pursuing it.
 


I am unaware of any AR frameworks specifically for Linux. There are a number of computer vision libraries that could be extended to eventually support AR.

Perhaps Aruco + Ogre?
https://www.uco.es/investiga/grupos/ava/node/26
https://www.ogre3d.org/

And ArUco is easier to use through OpenCV's aruco module: https://docs.opencv.org/3.4.1/d5/dae/tutorial_aruco_detection.html
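Here is a minimal sketch of marker detection with OpenCV's aruco module (it needs the contrib build, e.g. opencv-contrib-python), following the 3.4 tutorial linked above; the camera index and the dictionary choice are just assumptions for the sketch:

Code:
import cv2
import cv2.aruco as aruco

# predefined 6x6 marker dictionary (an arbitrary choice for this sketch)
dictionary = aruco.Dictionary_get(aruco.DICT_6X6_250)
parameters = aruco.DetectorParameters_create()

cap = cv2.VideoCapture(0)  # default webcam, index 0 assumed
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # find marker corners and IDs in the current frame
    corners, ids, rejected = aruco.detectMarkers(gray, dictionary,
                                                 parameters=parameters)
    if ids is not None:
        aruco.drawDetectedMarkers(frame, corners, ids)  # overlay detections
    cv2.imshow("aruco", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()

From the detected corners you can estimate the marker's pose relative to the camera (aruco.estimatePoseSingleMarkers) and render a 3D overlay at that pose, e.g. with Ogre.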

Supposedly, these smart glasses are designed for Ubuntu: https://www.daqri.com/products/smart-glasses/

Technically, Android's ARCore runs on Linux, since Android uses a Linux kernel.

As for hand gestures and voice control: I have seen hand gesture recognition using an Xbox 360 Kinect with ROS (Robot Operating System, http://wiki.ros.org/kinect). For proper hand gesture control you would need to recognize both the gesture and its location, which requires depth sensing + tracking + computer vision. It can be done, of course: ROS can automatically turn the depth data from a Kinect into a point cloud, from which you can determine a hand's location in 3D space, assuming you can recognize a hand. A rough sketch of reading that point cloud is below.
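Here is that sketch, subscribing to the point cloud with rospy. The topic name /camera/depth/points is an assumption (it matches the default openni_launch output, so check `rostopic list` on your setup), and the hand segmentation itself is left out:

Code:
import rospy
import sensor_msgs.point_cloud2 as pc2
from sensor_msgs.msg import PointCloud2

def on_cloud(msg):
    # iterate a handful of (x, y, z) points just to show the data flow;
    # a real gesture tracker would segment the hand region first, then
    # track its centroid over successive clouds
    for i, (x, y, z) in enumerate(
            pc2.read_points(msg, field_names=("x", "y", "z"), skip_nans=True)):
        if i >= 5:
            break
        rospy.loginfo("point: %.3f %.3f %.3f", x, y, z)

rospy.init_node("hand_tracker_sketch")
rospy.Subscriber("/camera/depth/points", PointCloud2, on_cloud, queue_size=1)
rospy.spin()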

Another option is to use something like TensorFlow to recognize gestures, but that won't give you position tracking. An example of that: https://medium.com/ymedialabs-innov...nd-gesture-pose-using-tensorflow-30e83064e0ed
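To give an idea of the shape of that approach, here is a generic tf.keras sketch of a small static-gesture classifier. This is not the linked article's model; the 64x64 grayscale input size and the five gesture classes are made-up assumptions:

Code:
import tensorflow as tf

NUM_GESTURES = 5  # assumed number of gesture classes

# a small convolutional classifier over cropped hand images
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(64, 64, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(NUM_GESTURES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=10)  # needs your own labeled data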

The biggest problem with these types of controls is processing power: recognizing hand gestures reliably is computationally expensive.

There are also voice controls for Ubuntu, though I don't know how well they are maintained: https://www.noobslab.com/2014/06/control-your-ubuntulinux-mint-system.html
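If you want to roll your own, here is a minimal sketch of mapping a recognized phrase to a shell command, using the third-party SpeechRecognition package (pip install SpeechRecognition, plus PyAudio for microphone access). This is just one possible approach, not what the linked tool does, and the phrase-to-command mapping is hypothetical:

Code:
import subprocess
import speech_recognition as sr

COMMANDS = {"open terminal": ["gnome-terminal"]}  # hypothetical phrase -> command map

r = sr.Recognizer()
with sr.Microphone() as source:
    print("Say a command...")
    audio = r.listen(source)
try:
    phrase = r.recognize_google(audio).lower()  # online recognizer, needs network
    print("Heard:", phrase)
    if phrase in COMMANDS:
        subprocess.Popen(COMMANDS[phrase])
except sr.UnknownValueError:
    print("Could not understand the audio")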

A few command-line personal assistants have also popped up recently, including Yoda: https://github.com/yoda-pa/yoda
 
That seems really exciting. I'm only a noob, but I still think this will be one of the core technologies of the future, beyond anyone's imagination. @john2find, could you let me know what you are working on and whether I can collaborate in any way? I've got a long way ahead, but I will try my level best to help out and learn in the process.
 
I had some (mostly old) notes on a few projects, so this is more of a mixed bag of pieces. Not sure they will help you, but just in case:
Jasper (voice computing platform), Snips Natural Language Understanding, Pupil (eye tracking), SimpleCV, ApertusVR (virtual/augmented reality engine), ARToolKit

On personal assistants / home assistants:
Kalliope, OpenJarvis, Lucida (aka Sirius, a project using OpenCV; don't know which road they went down), JarvisAI (a project cooked up at Facebook), Home Assistant, openHAB...
 
