The paper describes the methodology and experimental results for revealing similarities in the thermal dependencies of the accelerometer and gyroscope biases of 250 inertial MEMS chips (MPU-9250). Temperature profiles were measured on an experimental setup with a Peltier element for temperature control, and the resulting temperature curves were classified with a machine learning approach.
A perfect sensor would have no thermal dependency at all; thus, only sensors in the clusters with the smallest dependency (the smallest total temperature slopes) should be pre-selected for the production of high-accuracy inertial navigation modules. It was found that no unified thermal profile (a "family" curve) exists for all sensors in a production batch. However, sensors can be grouped according to their parameters, and a temperature compensation profile can then be regressed for each group. Twelve slope coefficients over 5 °C temperature intervals from 0 °C to +60 °C were used as the features for the k-means++ clustering algorithm.
The minimum number of clusters needed to separate all sensors well by their bias thermal profiles was found to be 6 in our case, determined by applying the elbow method. A regression curve can then be obtained for each cluster.
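The clustering procedure described above can be sketched as follows. This is a minimal illustration, not the authors' code: the synthetic slope data, noise levels, and group structure are invented for demonstration, with 250 sensors each described by 12 slope coefficients, clustered with k-means++ while the inertia curve is recorded for the elbow method.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Hypothetical synthetic data: 250 sensors, each described by 12 bias-slope
# coefficients (one per 5 °C interval from 0 °C to +60 °C), drawn around
# 6 assumed group centers. Real data would come from the measured profiles.
n_sensors, n_features, n_groups = 250, 12, 6
centers = rng.normal(0.0, 1.0, size=(n_groups, n_features))
true_group = rng.integers(0, n_groups, size=n_sensors)
X = centers[true_group] + rng.normal(0.0, 0.05, size=(n_sensors, n_features))

# Elbow method: fit k-means++ for a range of k and record the inertia
# (within-cluster sum of squared distances); the bend in this curve
# suggests the minimum adequate number of clusters.
inertias = []
for k in range(1, 11):
    km = KMeans(n_clusters=k, init="k-means++", n_init=10, random_state=0)
    km.fit(X)
    inertias.append(km.inertia_)

# Final model at the elbow (k = 6 in the paper's case); each sensor's
# cluster label selects the compensation profile regressed for its group.
km_final = KMeans(n_clusters=6, init="k-means++", n_init=10, random_state=0)
km_final.fit(X)
```

In practice one would plot `inertias` against `k` and pick the point where the marginal decrease flattens; the cluster labels in `km_final.labels_` then assign each chip to a group-specific temperature compensation curve.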
Learning to Walk With Toes
(2020)
This paper explains how a model-free approach (with respect to both the robot model and the behavior to be learned) can facilitate learning to walk from scratch. It is applied to a simulated Nao robot with toes. Results show a 30% improvement in speed compared to a model without toes, and also compared to our model-based approach, albeit with less stability.
The use of artificial intelligence continues to impact a broad variety of domains, application areas, and people. However, interpretability, understandability, responsibility, accountability, and fairness of the algorithms' results - all crucial for increasing humans' trust in these systems - are still largely missing. The purpose of this seminar is to understand how these components factor into a holistic view of trust. Further, this seminar seeks to identify design guidelines and best practices for building interactive visualization systems that calibrate trust.
Artificial intelligence (AI), and in particular machine learning algorithms, are of increasing importance in many application areas, but interpretability and understandability, as well as responsibility, accountability, and fairness of the algorithms' results - all crucial for increasing humans' trust in these systems - are still largely missing. Big industrial players, including Google, Microsoft, and Apple, have become aware of this gap and have recently published their own guidelines for the use of AI in order to promote fairness, trust, interpretability, and other goals. Interactive visualization is one of the technologies that may help to increase trust in AI systems. During the seminar, we discussed the requirements for trustworthy AI systems as well as the technological possibilities provided by interactive visualizations to increase human trust in AI.