TECH2.jpg

A combination of state-of-the-art hardware and an innovative AI algorithm

Leveraging Deep Neural Networks for our Natural Language Processing and Computer Vision models

nlp.jpg

Significant breakthroughs in empowering computers to understand language
have led us to build on Natural Language Processing models.
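One of the conversational cues the system watches for is repetitiveness (see the "Avoid Repetitiveness" icon below). As an illustrative, much-simplified stand-in for the deep NLP models, repetition can be sketched as bigram overlap between consecutive utterances; the function names and threshold here are assumptions, not the actual implementation:

```python
from collections import Counter

def bigrams(text: str) -> Counter:
    """Count word bigrams in a lowercased utterance."""
    words = text.lower().split()
    return Counter(zip(words, words[1:]))

def repetition_score(current: str, previous: str) -> float:
    """Fraction of the current utterance's bigrams already used previously.

    A score near 1.0 suggests the speaker is repeating themselves, so the
    "Avoid Repetitiveness" cue could be shown.
    """
    cur, prev = bigrams(current), bigrams(previous)
    if not cur:
        return 0.0
    shared = sum((cur & prev).values())
    return shared / sum(cur.values())
```

In practice a deep language model captures paraphrased repetition as well; this sketch only catches verbatim reuse.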


To understand the posture, movements, and intentions of a human being,
we focus on Computer Vision models:
facial emotion recognition, body language, and object detection
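A body-language signal such as "is the wearer facing their conversation partner" can be derived from facial landmarks with simple geometry. The sketch below assumes 2D landmark coordinates from some face-tracking model; the function name and tolerance are illustrative assumptions:

```python
def facing_partner(left_eye_x: float, right_eye_x: float, nose_x: float,
                   tolerance: float = 0.15) -> bool:
    """Rough frontal-gaze check from 2D face landmarks.

    When the head turns away, the nose tip drifts toward one eye; if it
    stays near the midpoint between the eyes, the person is likely facing
    their conversation partner. Coordinates are assumed to come from a
    face-tracking model (hypothetical input, not the actual pipeline).
    """
    eye_span = abs(right_eye_x - left_eye_x)
    if eye_span == 0:
        return False
    midpoint = (left_eye_x + right_eye_x) / 2
    return abs(nose_x - midpoint) / eye_span <= tolerance
```

The real models combine many more cues (full-body pose, emotion classification, detected objects), but the same idea applies: raw detections are reduced to a small set of boolean signals.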

By integrating AI, we trigger feedback to the participants in real time, so we can measure reaction time and efficiency instantly and reliably.
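Conceptually, real-time triggering maps the signals produced by the NLP and Computer Vision models to the feedback icons described below. A minimal sketch, with assumed signal names and thresholds (the real system's rules are not public):

```python
from typing import Callable

# Hypothetical signal names and thresholds; the actual system derives
# these values from its NLP and Computer Vision models.
FEEDBACK_RULES: list[tuple[Callable[[dict], bool], str]] = [
    (lambda s: s["volume_db"] < 45, "Raise Voice Tone"),
    (lambda s: s["repetition_score"] > 0.6, "Avoid Repetitiveness"),
    (lambda s: s["partner_speaking"] and s["user_speaking"],
     "Dialogue | Turn taking"),
]

def feedback_for(signals: dict) -> list[str]:
    """Return the icons to display for the current frame of signals."""
    return [icon for rule, icon in FEEDBACK_RULES if rule(signals)]
```

Evaluating such rules once per frame keeps latency low enough for feedback to feel instantaneous to the wearer.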

The feedback design

The design process, backed by research, participatory design, and numerous iterations, resulted in a subtly animated visual language aimed at explicitly engaging and stimulating the user.
Decisions on color, size, placement, and movement were user-tested and modified accordingly. Each icon had a specific target: turn taking, voice adjustments (increasing or decreasing the volume), avoiding repetitiveness, and ending a conversation.

*Arrows' visual language is constantly improving and progressing with the support of relevant professional advisors.

Dialogue | Turn taking

Raise Voice Tone

Avoid Repetitiveness

End conversation

Well done

technology.jpg

Research & Development

Microsoft’s HoloLens 2 was selected as the prototype hardware because it offers the most comfortable and immersive mixed reality experience available, bringing our vision to the initial experiment. The device’s reliability, together with the security and scalability of Microsoft’s cloud and AI services, proved valuable when developing and testing the system.

Book a demo

Thanks for your interest!

Please enter your personal details and a message.

We'll reply as soon as possible.