Machine learning, automation, big data, cognitive computing, deep learning: these are all facets of Artificial Intelligence (AI). AI is clearly going to have a big impact on our daily lives. Therefore, it is important that we get the humanistic aspects of the technology right. We need to study and redesign the ways people interact with AI systems and make sure we develop technologies that are compatible with our society’s core values. How can we build and design systems that benefit and augment people and create real value for real people?
Balancing AI efficiency with genuine, healthy and fair human collaboration at work, at home and within our communities will be one of the biggest challenges for our society. Leaps Lab helps individuals and teams learn to leverage their human capacities and well-being in social dynamics increasingly shaped by AI. Luis Bohorquez, founder of Leaps Lab, is a self-proclaimed ‘Keep it Human’ advocate. How can we design AI solutions that foster people’s power and well-being, and how can we leap people into a positive social future with intelligent machines?
AI is dramatically changing business, and chatbots – fueled by AI – are becoming a viable customer service channel. At Embot, Fadoa Schurer and her business partner Serge Cornelissen create Dutch AI conversation technology to achieve human-friendly dialogue with digital services. Inside a chatbot’s AI are machine learning and what’s known as natural-language processing (NLP), which make chatbots more intuitive, accessible and efficient. Ideally, human agents and chatbots work together to improve the customer experience. From simple recommender algorithms to more complex consumer recommendations, how is conversational AI becoming the new standard for customer service?
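The division of labour described above, where a chatbot answers what it recognizes and hands the rest to a human agent, can be sketched in a few lines. This is a deliberately naive keyword matcher, not Embot's actual technology; the intents, keywords and replies are invented for illustration, and a production system would use trained NLP models instead.

```python
# Toy sketch of a chatbot's intent-matching core with a human hand-off.
# All intents, keywords, and replies are hypothetical examples.

INTENTS = {
    "opening_hours": (["open", "hours", "closing"],
                      "We are open Monday to Friday, 9:00-17:00."),
    "order_status": (["order", "delivery", "package"],
                     "Please share your order number and I will look it up."),
}

def respond(message):
    """Return (intent, reply); unmatched messages are routed to a human."""
    # Lowercase and strip simple punctuation so "open?" matches "open".
    words = {w.strip(".,?!") for w in message.lower().split()}
    for intent, (keywords, reply) in INTENTS.items():
        if words & set(keywords):
            return intent, reply
    # No intent recognized: escalate to a human agent.
    return "handoff", "Let me connect you to a human agent."
```

The hand-off branch is the key design choice: rather than guessing, the bot defers to a person, which is exactly the human–machine collaboration the paragraph describes.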
Virginia Dignum has more than 20 years of experience in the field of AI. She is Associate Professor of Social Artificial Intelligence at the Faculty of Technology, Policy and Management at TU Delft, and her research focuses, among other things, on value-sensitive design of intelligent systems and on the interaction between people and intelligent systems and teams. As our perception of AI shifts from tool to teammate, it becomes obvious that we need to rethink responsibility. What means are needed to integrate moral, societal and legal values with technological developments in AI, both during the design process and as part of the deliberation algorithms employed by these systems?
‘As soon as it works, no one calls it AI anymore.’
John McCarthy, 1956