The interplay of design, data, and human-centricity in shaping AI’s future
Looking at AI only from a technical point of view means overlooking a large part of this complex development. Even though the intricacies of coding dominate the discussion, the many other elements interacting with this technology should not be left out when considering the implications of AI for business, for people’s lives, and for society as we know it.
There are clear limits to how far businesses can chase every new trend. Diving blindly into new technologies quickly drains an organisation’s time and money and ends with investment being shut down. Perfuming flawed and obsolete processes with a shiny new layer of generative AI may be sold as a quick win, but it is usually a waste. Therefore, Koos focuses on making digital life more human – leveraging AI when needed to create value and to generate a positive impact for the people involved and for society as a whole.
To explore AI, it is important to start from its basics – data. Coding, data science and machine learning bring extremely valuable quantitative data to the table, providing access to insights like never before. These are crucial resources for digitalisation and decision-making, but they do not paint the full picture. While quantitative data can give us trends and objective correlations, there is a more subjective and causal side of data that can only come from in-depth qualitative research and that complements the hard-numbers picture. In this match lies the sweet spot for innovation: when qualitative and quantitative data can talk, they balance each other and achieve far more than either can alone.
How to reach the innovation sweet spot?
Designing AI is about using AI as a valuable tool when it makes sense: analysing it from a technical, business and customer point of view while bulletproofing it against potential negative impacts. Thinking about the role of Design and Innovation in AI also means understanding the process of making digital life more human.
Designing better AI outcomes
Ensure that the AI being developed is needed to solve real user problems and that it is ethical
Providing a better context for AI
Ensure that it is embedded in a solid digital innovation strategy and that the teams are aware of their roles and aligned on ways of working
Zooming in on these scenarios, we have to consider four spheres to provide both better outcomes and a better context for AI development:
At the core is the system itself, the main end goal of the development process, which should be grounded in real and impactful problems to be solved. In this blog, we treat it mostly as a black box and leave the intricate technical details to the machine learning experts… They know it better than we do!
Around this system, several teams make the magic happen, and a variety of stakeholders invent, manage or even ‘sell’ the use of this system so that it lands within the organisation and reaches its potential users. In big corporations, strategists ought to design a coherent plan of action to ensure this AI system leads to a positive impact and sustains customer value in the long term.
Enveloping all of this is the organisational level, related to the strategy, ways of working and behaviours that guide the organisation (and all its products and services) forward. Here we are at a higher management level, responsible for ensuring that investment in AI is driven by a solid vision and strategy rather than a single low-budget effort.
Around it all, ethics plays an increasingly important role as we see the impact of AI on our daily lives and how it is shaping our society, for good or for bad.
Let’s take a deeper dive into each one.
Sphere 1: AI-System
In the “black-box” core, we find the AI system itself as the product, for instance ChatGPT or Spotify’s music recommendations. When developing new solutions in this first sphere, the key questions to answer are: “Are we solving the right problem at the right time?” and “Are we solving the problem right, or just pushing AI because everyone else is?”.
Now and then, an emerging technology knocks on our door with the promise of being the cutting-edge solution we have long waited for. We have been there with the Metaverse, blockchain, NFTs, and so on. We get excited about these bright new promises and cannot wait to use them to address business issues and solve every problem their advocates claim they can solve. A key decision moment lies in separating marketing buzz from real opportunities to increase the value delivered to your customers.
When thinking about the AI system itself, it is important to always consider that every digital experience is somehow a physical experience as well. Even with AI, there is a development part in which humans code and, of course, someone will use or be impacted by it on the other end. Some components are digital and others are not, but the human part of technology is inevitably present.
From that standpoint, we shift perspective away from what was once a tech-driven approach. So, before thinking about AI models and coding, before jumping into the black screens, Koos flips the scene to a human-centric point of view.
First of all, it is necessary to understand the real pains, needs and hopes of each stakeholder who might be impacted, as well as the roots of the problems they are experiencing. This points you towards real value delivery and informs how to create solutions that address their main concerns.
Once the human-centric foundation is set, we move on to spotting the necessary improvements and a course of action that can create real value. Instead of jumping headfirst into using AI to automate things, we start by thinking about the humans, their needs and feelings, and only then about how AI could address them.
In this mindset, we always start developing solutions from their desirability, which means understanding what people need. Only then do we move into the other aspects of reaching the innovation sweet spot: checking whether we can use the technology we have to address these needs (feasibility), understanding how we can build a sustainable business model around it (viability), and considering how the proposed solution may impact our society and environment from an ethical standpoint. Of course, while developing new solutions these four lenses are constantly revisited and combined, but it all starts with human needs.
The gap between Design and AI lies in a divergence of approaches that, once addressed, can become complementary.
On the one hand, although the typical Design approach starts with desirability, it can fall short of translating those needs into tangible value in the end, stopping too soon in the process and remaining at the level of a very desirable wireframe or prototype.
On the other hand, AI development can be resource-intensive and aim at the wrong problem, being very feasibility-focused and disregarding the value to be generated for its users. Mutual support helps turn ethereal concepts into tangible AI solutions: by using design methods, we can better understand real customer problems, which sharpens the tech scope and ensures the technology has a more impactful application.
Design. AI. What if we just combine the two of them?
To combine our design approach with the power of AI, Koos suggests an open and symbiotic collaboration between the well-known Double Diamond and CRISP-DM, the methodology used to develop AI and data-mining systems.
Our first step in this combination is a broad understanding phase that needs to be solidly grounded in three investigations: customer (what people need), data (what we have available or can collect) and business (how this helps us survive in the market). Only with sufficient knowledge of these aspects can we answer the inevitable question: “Is AI the best approach to solve the problem we identified?”. Once there is an answer, AI is explored consciously rather than pushed because of its trendy appeal.
Afterwards comes the imagine phase. At this stage, designers and innovation experts develop a concept, prototype screens, processes and information architecture, and test or propose solutions, while developers and engineers start crunching and cleaning data and preparing the tech infrastructure. Within these double lanes of desirability and feasibility, we build a common ground of interaction, combining Design & AI.
Once our main risks and assumptions are resolved, it is important to build a functional model for technical evaluation and customer validation. We should approach this as iterative cycles of collecting qualitative and quantitative information. This process helps us continuously improve and gradually increase our prototype’s maturity up to an MVP ready for launch. During this phase, it is highly recommended to assemble agile teams for rapid and effective feedback loops.
What happens when you are solving the wrong problem?
Focusing on the technical challenges instead of real value to the customer might bring a lot of self-fulfilment for the internal tech team but frustration for the end users.
The OLX data science team identified a potential pain point in the platform’s catalogue for second-hand vehicles: a lot of text inside product pictures (which could also contain illegal or harmful information). An AI algorithm was developed to remove this text and leave a cleaner image. The launch of the feature led to customer complaints about the removal of branding icons and even contact numbers, highlighting the importance of understanding customer needs and problems before implementing AI solutions. Curious to learn more about this case study? Read the AI-by-Design white paper by Serena Westra and Ioannis Zempekaki.
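For readers curious about the mechanics, the sketch below shows one way such a feature could be built: detect text regions with OCR and then inpaint them away. It is a minimal, hypothetical sketch using OpenCV and pytesseract; the libraries, thresholds and function names are our own illustrative assumptions, not OLX’s actual implementation.

```python
# Hypothetical sketch of a "remove text from product photos" pipeline.
# Assumption: OpenCV and pytesseract are installed, with Tesseract OCR available locally.
import cv2
import numpy as np
import pytesseract


def remove_text(image_path: str, min_confidence: float = 60.0) -> np.ndarray:
    """Detect text regions with OCR and paint them out with inpainting."""
    image = cv2.imread(image_path)
    data = pytesseract.image_to_data(image, output_type=pytesseract.Output.DICT)

    # Build a mask covering every confidently detected word.
    mask = np.zeros(image.shape[:2], dtype=np.uint8)
    for i, word in enumerate(data["text"]):
        if word.strip() and float(data["conf"][i]) >= min_confidence:
            x, y = data["left"][i], data["top"][i]
            w, h = data["width"][i], data["height"][i]
            cv2.rectangle(mask, (x, y), (x + w, y + h), 255, -1)

    # Fill the masked regions from their surroundings, producing a "cleaner" image.
    return cv2.inpaint(image, mask, 3, cv2.INPAINT_TELEA)


# cleaned = remove_text("listing_photo.jpg")
```

Technically, a pipeline like this does exactly what it was asked to do, and that is the point: it also erases brand logos and phone numbers that sellers deliberately put in their photos, which is precisely what frustrated OLX’s users.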
Sphere 2: Team
Developing and implementing AI is not simple; it requires many people and a lot of effort. If the right professionals are not involved in the right context, their joint efforts will not reach the intended results.
It is not surprising to find teams composed of professionals with similar backgrounds and experiences. Although they carry the right titles for team building, they won’t necessarily perform in sync. Alone in their hubs, individuals can become too focused on their own expertise instead of acting in a multidisciplinary way towards a shared vision.
Even with all the dailies, syncs, alignments and follow-ups, it is not uncommon for the different teams to lose sight of where the ship is heading. Poor communication between peers, throwing instructions ‘over the fence’, assuming others have the same understanding as you, and low visibility of each other’s roles and tasks eventually lead to misalignment and rework, where sentences such as “that does not match the original concept” and “this is not possible to do” take centre stage.
Another challenge is timing: determining the appropriate moment to involve the right internal and external stakeholders so that the process moves forward and avoids setbacks. Koos believes teams should work collaboratively and transparently, building bridges of communication and taking each other’s points of view into account in decision-making. This also requires key people to be involved at the right moment, so that knowledge is not lost along the way and the transition from one team to the next runs smoothly.
A human-centred point of view, involving multiple stakeholders and qualitative research, is the perfect match for technical knowledge, with its multiple perspectives on feasibility and its quantitative data.
For a collaborative process to work effectively, it is crucial to establish specific agreements and adopt a common language to decode the jargon and specialised words of each area. Frequently, people argue about an issue when they are in fact expressing similar ideas in different ways. Additionally, creating a playbook helps identify intersections in the processes, highlighting how they complement each other and the strengths different teams bring to each step of the journey. Understanding each other’s domains, implementing collaboration practices, and defining clear roles and responsibilities are crucial to ensuring accountability.
Speaking from experience, we joined forces with Galp, a Portuguese supplier and trader of oil, gas & power, to outline and agree upon a strategy to provide guidance and coordinate data-related activities within the organisation. The main goal was to create an efficient flow that would continuously address stakeholders’ needs when making data available by orchestrating the different actors in this journey. Want to dive deeper into our partnership with GALP?
Sphere 3: Digital Innovation Strategy
This whole effort of developing or improving an AI system, bringing the user perspective in, and aligning teams could be seen as just a project. But we assure you it can be far more impactful if you go further and embed it within the organisation’s strategy. As digital innovation maturity grows, more impact can be made and the structure supporting the development of data-related solutions becomes more robust. After all, to build AI systems we need data, and to have meaningful data we need commitment from all levels and areas, where everyone perceives the value that data-driven solutions can bring.
When looking from an organisational perspective, other factors start to play a role, including the overarching Digital Innovation Strategy. Combining MIT’s 4-cap model with Koos’ long experience transforming organisations, we can outline the main capabilities for promoting Digital Innovation Leadership. Each step on this journey includes a list of tools for deep diving, with one main line of reasoning running through them all: they must be explored simultaneously top-down and bottom-up inside an organisation to transform its culture effectively.
Visioning with Customer centricity and Value Proposition
Visioning deals with crafting a clear and compelling picture of the desired future. This is inherently linked to understanding customer needs, desires and aspirations, but also to distilling that future vision into Objectives and Key Results (OKRs) that will guide you forward. This part is about the future and the main areas of focus and interest. At this stage and level, it is easier to consider AI and visualise how it can add new value propositions and be integrated into the future vision.
Where do we want to go, what are the OKRs we are trying to reach and what impact are we looking for?
Sensemaking with Leadership and Foresight
Digital Transformation requires a deep understanding of the digital landscape, the organisation’s position, and external factors to navigate complexity and uncertainty.
How can I, as a leader, think about digital not only as a tool but as a culture inside the organisation?
To find answers in a complex situation, it is essential first to understand where you are and where you want to go. Gathering relevant information about the main internal issues, your current approach to serving clients, and even key market trends that could have an impact is fundamental to designing a solid and realistic plan moving forward.
Relating through Skills, Culture & Synergy
Relating is about building meaningful connections: speaking the same language, understanding the company’s direction, and utilising collective strengths. It promotes a unified front, where everyone works together towards shared objectives.
How might we expand and share this knowledge across the different areas of the company?
Certainly, painting a well-rounded picture of the future is important, but putting it into practice is often the real challenge, especially in big organisations. To manage change at all levels, it is important that leadership sets the example and clearly points the way forward. However, this has to be matched with motivated teams that are equipped with enough resources and know their role and what is expected of them. This means openly communicating to different teams their part in the whole, setting reasonable and actionable goals, and providing upskilling where needed.
Innovating & Implementing through Execution, Digital Operations & Technology
Innovating and implementing bring the vision to life by executing its goals across all levels. Different levels of action need to be in place to transform vision into reality, from robust digital and tech operations to long-term-horizon innovation teams. This ensures efficient execution, integration, and operational agility.
What structures need to be in place in the organisation so that all the ideas above are put into practice?
Each organisation may require different “engines” to move forward. In our experience, at least three components are needed to make it work. The first two are related to exploiting current value offerings: the data engine focuses on digitalising processes and developing new, incremental digital solutions, while the experience engine focuses on assessing the customer experience and finding opportunities within current journeys. Both engines feed each other: the experience engine helps prioritise data initiatives, and the new digital solutions developed in the data engine impact the end customer experience.
On another level, the goal of the new horizons engine is to explore mid- to long-term opportunities for innovation. As we all know, if we only improve what we currently offer, we are bound to lose our place to other players that better fit changing customer needs and technological advancements. This radical innovation engine is responsible for ensuring your organisation’s survival in the long run and is inherently experimental and future-focused.
Digitalisation demands work, and you need qualitative and quantitative information to bring to life a realistic (and optimistic) future. Development and AI cannot be seen as incidental projects but as part of the digital culture itself.
To give an example, we collaborated with a 100+-year-old company with almost 100,000 employees spread across offices globally. The innovation challenges Koos encountered in this project were not only related to its long tradition. The business itself revolves around risk avoidance, which translates into a mindset that isn’t favourable to digital innovation. To orchestrate this ecosystem, we put our efforts into understanding the digital landscape and translating it into a clear way forward using an Innovation Thesis and a structured path through an Innovation Playbook. Want to know the step-by-step process of this strategy for a Fortune 500 tech organisation?
Sphere 4: Building AI with human-centred ethics
This is probably one of the most complex and most discussed topics regarding AI today. The main questions concern the boundaries that must be defined for AI to guarantee ethics and a positive impact on our society, environment and planet. There are a lot of grey zones and issues to consider when deciding what role AI should play in human activities. For instance, a lot of buzz is arising around AI-generated images and videos, how they impact artists from an aesthetic standpoint, and how these generative AI algorithms are trained.
Since AI systems are trained on data created by humans, they inevitably mirror human behaviours and, as a result, replicate our social biases. When building these systems, we must also consider the negative implications they may have, how data may be socially unbalanced or biased, and how to keep control, over time, of how these systems are changing or affecting our society.
Behind the screens, AI is already regarded with suspicion and can be seen as untrustworthy by users. Understanding what happens in the black box is not only a matter of control but of transparency and awareness of biased judgment. The main concern should not be solely about AI making mistakes. The real risk we face is the possibility of AI misleading us into thinking something is correct, even when it is not.
Regarding the ethics of using AI, we should reflect not only on the output generated but also on the data feeding it. Since AI models are built from large collections of information (not always with the content creators’ permission), authorship and copyright become an ethical crossroads. In productivity terms, AI is a great help, but it can also replace human labour. How can we deal with that without opening the door to some kind of societal breakdown?
The rise of powerful AI will be either the best or the worst thing ever to happen to humanity. We do not yet know which.
Bridging the gap between Design and AI
Well, if you got here, you are either very interested and should reach out to us to continue the discussion of this highly relevant topic, or you are searching for a “too long; didn’t read” summary. Either way, diving deeper into how we design our AI systems today is key for a brighter future. We can see in our daily lives the potential AI has to improve them, but more and more we uncover the side effects it may cause in our society. So, let’s talk more! For our fields to live in symbiosis, we need to share our knowledge, experience and capabilities. In particular, ethics in AI is still in need of deeper discussion.
Overall, AI is way more than black screens. Coding and modelling are an essential part of it, but for digitalisation to happen we need to take into account other systemic factors. AI needs design as much as design needs AI. Qual and quant data are two sides of the same coin, and together we have a much greater potential to actively promote change and deliver value to customers.
In conclusion, designing AI should go beyond the technical aspects and consider its impact on people and society. It is about using AI as a means instead of an end, and keeping it in check to prevent negative impact. Our experience tells us that by adopting a human-centred approach, we can ensure that AI adds value and positively impacts our digital lives.
Reach out to our expert if you want to know more about our experience with AI technologies and innovation.