It’s hard to escape the excitement surrounding Artificial Intelligence and what it will mean for leaders of teams inside our organisations.
But before we get too carried away with visions of robots invading the coffee area, stealing our jobs and our favourite biscuits, this is a key area for HR executives to consider: what could it mean for the way teams interact with this new resource, with each other, and for how they make decisions?
I am a huge fan of the term coined by Erik Brynjolfsson in his TED talk: we need to race with the machines, not against them!
“A Team of humans and computers can beat any computer or any human alone”
Robots as our teammates
Teams faced with challenges such as mass industry disruption from start-ups, economic instability or political uncertainty must leverage diversity of thought to ask the right questions of this new member of the team.
Leadership and development departments and leadership training offerings are dominated by programs such as effective teamwork and high-performing teams, designed at a time when AI capability didn’t exist. Effective teamwork and high-performing teams are, of course, still relevant, but they are table stakes in this new age of Artificial Intelligence.
Teams need to build a set of practices that I characterize as rituals, checklists and challenging techniques that will help them interact effectively with this new resource. In my recent book Seeing Around Corners I refer to this way of working as a paradigm of ‘messy teams,’ with examples of business and data scientists working together. The same holds true for AI – it’s all about curiosity and asking the right questions.
Following research with more than 2,000 leaders, I discovered that messy teams commit to establishing the following habits and behaviours through everyday interactions:
- Stay open and willing to change their minds by not ‘falling in love with the plan’
- Feel comfortable working with uncertainty
- Use frameworks and checklists to explore different perspectives from both domain and analytics expertise
- Challenge assumptions in every decision
- Never stop at the first good idea
- Use competing hypotheses rather than seeking a data report to support a preferred theory
Teams need to attract members from across the organization with particular thinking styles that bring a fresh vantage point to their discussions, and give them a clear remit to challenge, challenge, challenge. Capturing these alternative perspectives will be of critical importance when dealing with the large, complex situations that AI can handle.
The biggest risk and opportunity for big data and artificial intelligence lies in the team. Get the team rituals right and teams will be in a strong position to embrace AI. With these techniques and behaviours, teams can bridge the divide between analytical and domain expertise, and the robots will become part of the team.
Building trust between AI and business teams
Here are four ways teams can start to collaborate effectively with advanced analytics and AI:
1. Align to what’s important
Use your purpose as a handrail for discovery and build a shared understanding of the team’s ambitions. Allocate time to interpret, challenge and understand this context. Be open to debate around it, but primacy lies with the team here, not with any individual or piece of technology. Remember: ‘One Team’ is more important than ever as we enter this new era.
When researching Seeing Around Corners, I interviewed a US Navy SEAL, and we discussed the gap between the data people and the business people in the teams he has served in. He said the gap is least visible when there is strong focus and clarity on the mission and purpose. Here, domain and analytics titles are irrelevant, as everyone shares the same heightened level of focus. This is the hard part when dealing with robotics: humans still need to act as the guide and oversee the decision-making process.
2. Remove rank and power
The leader in the information age is a facilitator of discussions. Leaders should ensure they are staging the right discussion and posing the right questions to AI technology, allowing authority to shift back and forth between analytics and domain experience within the team. The perception of “I’m in charge here” or “It’s my project” undermines this.
3. Collaboration – low volume, high quality
Prioritise face-to-face communication: walk down the corridor, stage the meeting in a room, a coffee shop or a park. It doesn’t matter where, but do not let technology get in the way. Rely on the thinking muscles of the team first, not electronic means. The same principle applies to remote teams.
Use technology to stage the right discussion: a what-if analysis or outside-in thinking will trigger the right questions to put to AI. Challenge and debate must still remain central.
4. Challenge groupthink
We have all been subject to peer pressure from a young age in different forms, and while this pressure to conform can be a positive one, too much conformity will inhibit a team’s ability to challenge its collective understanding and integrate data and analytical thinking into its decision-making process.
Organizations want employees to be ‘one team’ and align around a common set of behaviours. As I have described previously, this is a good thing.
United behaviours are good.
United thinking is bad.
However, leaders need to be aware of the danger that groupthink can shortcut this, so that the team’s decision goes unchallenged. Overcoming groupthink requires acceptance of authentic dissent – which can be difficult, as groups often shun dissenters. As AI starts to operate within teams and offer alternative perspectives, the dynamics and rituals of the team have to accommodate and be prepared for this challenge. If no one listens to and accommodates the alternative viewpoint, all will be lost and millions of dollars squandered as you skip to your next meeting.
Leaders who will win with AI
The role of the leader in this new era turns to creating the environment and conditions that stage a broad range of thinking skills, drawing the best questions from their people to put to AI technology. I believe this type of leadership actually enables teams to think for themselves, developing ideas and questions in a bottom-up manner.
I am excited about how AI and robotics will become part of the team in the future. Ironically or not, it’s still a human challenge.