I have seen the SignAloud glove by Navid Azodi and Thomas Pryor, like this:
and I have also seen this post, which discusses the problems with this kind of work:
Their six-page letter, which Padden passed along to the dean, points out how the SignAloud gloves—and all the sign-language translation gloves invented so far—misconstrue the nature of ASL (and other sign languages) by focusing on what the hands do. Key parts of the grammar of ASL include “raised or lowered eyebrows, a shift in the orientation of the signer’s torso, or a movement of the mouth,” reads the letter. “Even perfectly functioning gloves would not have access to facial expressions.” ASL consists of thousands of signs presented in sophisticated ways that have, so far, confounded reliable machine recognition. One challenge for machines is the complexity of ASL and other sign languages. Signs don’t appear like clearly delineated beads on a string; they bleed into one another in a process that linguists call “coarticulation” (where, for instance, a hand shape in one sign anticipates the shape or location of the following sign; this happens in words in spoken languages, too, where sounds can take on characteristics of adjacent ones). Another problem is the lack of large data sets of people signing that can be used to train machine-learning algorithms.
So I would like to know: what AI modules do you know of that could improve on Azodi and Pryor's work by adding, as a first step, geometric position analysis? I would like to use popular AI frameworks like TensorFlow for this kind of analysis, ideally ones that run fast online and are actively maintained by a large community of users.
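To make concrete what I mean by geometric position analysis, here is a minimal sketch of the kind of pipeline I am imagining (my own assumption of an approach, not Azodi and Pryor's method): MediaPipe's hand-landmark tracker provides the 3D geometry, and a small TensorFlow/Keras classifier sits on top. `NUM_SIGNS` is a placeholder, and the model is untrained here, so its outputs are meaningless until it is fit on a real labeled dataset of signs (which, as the quoted letter notes, would still miss facial grammar):

```python
import cv2
import mediapipe as mp
import numpy as np
import tensorflow as tf

NUM_SIGNS = 10  # placeholder number of sign classes

# Small classifier over the 21 hand landmarks (x, y, z each).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(21 * 3,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(NUM_SIGNS, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

hands = mp.solutions.hands.Hands(max_num_hands=1, min_detection_confidence=0.5)
cap = cv2.VideoCapture(0)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # MediaPipe expects RGB; OpenCV delivers BGR.
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        lm = results.multi_hand_landmarks[0].landmark
        features = np.array([[p.x, p.y, p.z] for p in lm]).flatten()[None, :]
        probs = model.predict(features, verbose=0)  # untrained: random output
        print("predicted sign class:", int(np.argmax(probs)))
    cv2.imshow("hand", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
```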
Update:
I think some virtual-reality analyzers for this kind of position tracking must already exist, so which ones are popular, free, and open to contributions from a large community?
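By "position analysis" I mean deriving geometric features, such as finger joint angles, from the tracked 3D keypoints. Here is a minimal sketch of that step with NumPy; the landmark indices follow MediaPipe's 21-point hand layout (5/6/7 are the index finger's MCP/PIP/DIP joints), and the input array here is random placeholder data:

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at joint b (in degrees) formed by the 3D points a-b-c."""
    v1, v2 = a - b, c - b
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-9)
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# landmarks: (21, 3) array of hand keypoints, e.g. from a hand tracker.
landmarks = np.random.rand(21, 3)  # placeholder data for illustration

index_bend = joint_angle(landmarks[5], landmarks[6], landmarks[7])
print(f"Index finger PIP angle: {index_bend:.1f} degrees")
```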
Thanks for your attention.