1. How does audEERING® recognize diseases based on voice?

It has been scientifically proven that many psychological and physiological diseases affect the human voice. These changes may not be perceptible to a human listener, but they can be detected by machines. At audEERING® we try to find voice biomarkers that are affected by a disorder or disease. These biomarkers can then be used to infer whether a person shows symptoms of a specific illness or disorder. However, we need to be clear that we do not diagnose diseases; rather, we want to give doctors tools to use alongside more traditional diagnostic methods.
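
To make the general idea more concrete, here is a minimal sketch in Python, assuming the open-source opensmile package for acoustic feature extraction and a generic scikit-learn classifier as a stand-in for a real biomarker model. The file name, the randomly generated training data and the classifier are illustrative assumptions only, not audEERING®'s actual production pipeline.

```python
import numpy as np
import opensmile
from sklearn.linear_model import LogisticRegression

# 1) Extract eGeMAPS acoustic functionals (pitch, jitter, shimmer, loudness
#    statistics, ...) from a recording. Measurable quantities like these are
#    the raw material from which voice biomarkers are derived.
smile = opensmile.Smile(
    feature_set=opensmile.FeatureSet.eGeMAPSv02,
    feature_level=opensmile.FeatureLevel.Functionals,
)
features = smile.process_file("patient_recording.wav")  # hypothetical file name

# 2) A stand-in classifier. A real biomarker model would be trained on large,
#    clinically labelled datasets; random data is used here only so the
#    sketch runs end to end.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(100, features.shape[1]))
y_train = np.array([0, 1] * 50)  # 0 = no symptoms, 1 = symptoms
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# 3) The result is a probability that symptoms are present, not a diagnosis;
#    it is meant to support a doctor's assessment, not replace it.
symptom_probability = model.predict_proba(features.values)[0, 1]
print(f"Estimated probability of symptoms: {symptom_probability:.2f}")
```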

2. How is the technology supposed to be utilized to impart empathy to humanoid robots?

The technology enables humanoid robots to understand and respond to human vocal expressions, much like humans do. By teaching robots to recognize these expressions, for instance when someone is feeling down, they can adjust their interactions accordingly, e.g. by speaking more slowly or with a happier vocal expression. This allows for more empathic responses and enhances the robot’s ability to connect with people on a personal level.

3. What does the company hope to achieve from this?

We aim to improve how humans and machines interact by making the interaction more human-like. We believe the key to achieving this is by empowering machines to better understand human behavior and vocal expressions. We’re optimistic about this approach, as we believe it will ultimately benefit everyone involved.

Studies have shown that people who communicate regularly, even if not with another human, tend to have a more positive outlook on life, both in terms of health and happiness.

4. Will robots be used in healthcare and practices in the future?

Certainly! One of the major challenges facing healthcare systems across Europe is the shortage of medical staff in various specialties while the population is getting increasingly older. This creates a care gap. Meanwhile, we’re seeing a growing presence of robots in our world, and it’s certain that we’ll see more of them involved in healthcare in the future. They could, for example, be used to alleviate the workload of human care workers by performing routine tasks.

However, it’s not just about physical robots; digital assistants like avatars or even voice bots can also play significant roles in healthcare settings where a physical robot isn’t really necessary.

5. Will voice-based recognition of health changes become standard?

Voice analysis technology is highly scalable and has minimal hardware requirements – any smartphone works – so widespread use is very much within reach. In the long run, I expect voice analysis to become another symptom checker in clinics, used by professionals to verify and cross-check other diagnostics.

6. Could I speak into a device at home that analyzes my voice and makes a prognosis?

As mentioned before, this is a futuristic scenario. Active voice recording, where the user knows when the recording starts and ends, can already be used to give users some guidance on their health status and to recommend a visit to a doctor. Constant health surveillance is unlikely for reasons of data protection.

7. What diseases cannot be detected through the technology?

Any diseases that do not affect the voice cannot be detected – e.g. Voice AI cannot detect a broken leg. What it can detect is the level of pain that a broken leg causes. So even for strictly physical conditions that do not affect the voice directly, there is still use for the technology, e.g. for pain management.

8. How secure is disease detection through speech?

Secure or reliable? Security relates to the implementation side of the technology, where data storage and transfer should be secured and anonymised.

Reliability refers to how much we can trust a model that predicts biomarkers, or how confident a model is about its decisions. There are ways to measure these criteria. To improve reliability, we need to train models on larger amounts of data from many people with different diseases. We must also be careful about the cost of false negative predictions: a negative prediction for a disease may lead someone to ignore it and not visit a doctor. In general, if a model provides predictions, it should also provide a confidence level that tells users to what extent they can trust those predictions. Any medical decision should be verified by a qualified healthcare professional.
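
As a simplified illustration of this principle (not audEERING®'s actual scoring logic), a model's probability output can be turned into a prediction plus an explicit confidence statement, with an "uncertain, please consult a doctor" outcome when the model is not confident enough, and with a deliberately low threshold for ruling a disease out to limit costly false negatives. The thresholds and wording below are assumptions for the sake of the sketch.

```python
def report_prediction(p_disease: float,
                      flag_threshold: float = 0.5,
                      rule_out_threshold: float = 0.2) -> str:
    """Turn a model's probability output into a hedged, user-facing message.

    Both thresholds are illustrative assumptions. The rule-out threshold is
    deliberately low because a false negative (telling someone they are fine
    when they are not) is costlier than a false positive, so the sketch only
    rules out a disease when the model is quite sure.
    """
    confidence = max(p_disease, 1.0 - p_disease)  # crude confidence proxy
    if p_disease >= flag_threshold:
        return (f"Possible indicators detected (confidence {confidence:.0%}). "
                "Please consult a doctor; this is not a diagnosis.")
    if p_disease <= rule_out_threshold:
        return (f"No indicators detected (confidence {confidence:.0%}). "
                "If symptoms persist, please still see a doctor.")
    return "The model is not confident enough to give an indication; please consult a doctor."


print(report_prediction(0.72))
print(report_prediction(0.35))
print(report_prediction(0.05))
```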

9. How long did it take to develop the software?

The openSMILE toolkit has been in development for almost 20 years, starting with a university research group at TU München. audEERING® was founded in 2012 as a spin-off from the Technical University of Munich. So, after years of scientific work at the university, we have continued developing the technology for more than 10 years now, and we are constantly improving it. devAIce® and AI SoundLab, for example, were launched about four years ago, but the research behind them was a long process.

10. Where is Voice AI already being applied in the healthcare sector?

In the health sector, there are already physical devices in the area of speech pathology; these devices provide clues for better treatment. Our technology is currently being used for clinical studies, tests, screenings and in research. audEERING’s devAIce® technology is implemented in the iMotions research platform, alongside other trackers of human behaviour for scientific studies.

We have recently announced a collaboration with Navel robotics, which develops social robots.

11. Which devices is the technology compatible with?

Our technology boasts broad compatibility, supporting a wide range of devices across various ecosystems. Through our devAIce® Web API, we provide a cloud-based solution enabling access from any device. 
Alternatively, our on-premise solution devAIce® SDK enables computation on a spectrum of devices, ranging from robust servers to mobile devices and embedded platforms. Our support extends across major operating systems including Windows, macOS, Linux, iOS, and Android, ensuring seamless integration regardless of the platform. Essentially, it can be used with any recording device.
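
As a generic sketch of how a cloud-based Web API of this kind is typically called, the snippet below sends an audio file over HTTPS and prints the JSON response. The endpoint URL, credential and response fields are hypothetical placeholders, not the actual devAIce® Web API interface.

```python
import requests

# The endpoint URL, credential and response fields below are purely
# hypothetical placeholders used to illustrate the cloud-based pattern:
# any device that can record audio and make an HTTPS request can use such
# a service. This is NOT the actual devAIce® Web API interface.
API_URL = "https://api.example.com/v1/audio/analyze"  # placeholder URL
API_KEY = "YOUR_API_KEY"                               # placeholder credential

with open("recording.wav", "rb") as audio_file:
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        files={"file": audio_file},
        timeout=30,
    )

response.raise_for_status()
print(response.json())  # e.g. extracted acoustic measures, depending on the service
```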

12. What about data protection? Do regulations hinder the use of the technology?

Data protection holds significant importance within the EU, and it’s imperative that it stays that way. However, the EU is falling behind some other countries in AI development. At audEERING®, we recognize the gravity of this issue and are already fully prepared for the new regulations introduced by the AI Act. We are committed to ensuring the fairness of our models and compliance with regulatory standards.

However, the regulations can stifle innovation. We need large quantities of data to research innovative ideas and to scientifically validate our findings. Over-regulation from the EU could mean that research takes longer or is simply not possible to conduct.