With the (still new) “Living Objects Lab”, AI (artificial intelligence) is now integrated into teaching and research at KISD: Infrastructure, (research) space, supervision and, above all, a wide range of tools and devices are available to embed “intelligent” systems in physical objects. Through meaningful human-machine interfaces, these “objects” become “living objects”.
The information processing in AI systems is usually outsourced to powerful servers (in the “cloud”), while the object merely serves as an interface. Increasingly, however, this processing is also carried out (or pre-processed) decentrally and locally on the end device (e.g. a microcontroller). This approach is called Edge AI/Computing, derived from the “edge” of the network/cloud.
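To make the distinction concrete, here is a minimal Python sketch of what “local” inference on an edge device can look like: the trained model lives on the device itself and no cloud server is contacted. The model file name, input shape and the use of the tflite_runtime package are illustrative assumptions, not part of the course material.

```python
# A minimal sketch of local ("edge") inference with a TensorFlow Lite model.
# Model file name and input data are placeholders.
import numpy as np
import tflite_runtime.interpreter as tflite

# Load a model stored on the device itself -- no network or cloud involved.
interpreter = tflite.Interpreter(model_path="gesture_model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# A single sensor reading (e.g. an accelerometer window), shaped for the model.
sample = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])

interpreter.set_tensor(input_details[0]["index"], sample)
interpreter.invoke()
prediction = interpreter.get_tensor(output_details[0]["index"])
print("local prediction:", prediction)
```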
In this course, we first deal with the term “AI” in general, cut through hype and buzzword battles, and learn how AI systems actually work “under the hood”. Then we look at the conceptual foundations of Edge AI, discuss its pros and cons, and learn how to generate custom datasets, train custom AI models from scratch, and finally deploy them in an object, roughly along the lines of the sketch below.
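The following Python sketch outlines that workflow under simplifying assumptions (random placeholder data, a tiny network, hypothetical file names); the actual datasets, models and tools used in the course will differ.

```python
# Sketch of the workflow: build a small dataset, train a tiny model from
# scratch, and export it for use on a device. All data and names are placeholders.
import numpy as np
import tensorflow as tf

# 1. Custom dataset: here just random sensor-like windows with two classes.
x = np.random.rand(200, 64).astype("float32")
y = (x.mean(axis=1) > 0.5).astype("int32")

# 2. Train a small model from scratch.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(64,)),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x, y, epochs=10, verbose=0)

# 3. Convert to TensorFlow Lite so the model can run on a small end device.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
with open("living_object_model.tflite", "wb") as f:
    f.write(tflite_model)
```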
The prerequisite for participation is some very basic experience in programming (What is a function? Is “2.5” an integer or a float?). This can be acquired, for example, in the preceding course “Interactive Systems [Fundamentals]”.
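For orientation, the questions above correspond roughly to this level of Python, shown purely as an illustration of the expected entry level:

```python
# The kind of basics meant by the prerequisite (illustrative only):
def double(x):        # "What is a function?" -- a reusable, named block of code
    return x * 2

print(type(2))        # <class 'int'>   -> 2 is an integer
print(type(2.5))      # <class 'float'> -> 2.5 is a float
print(double(2.5))    # 5.0
```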