Human Oversight: Keeping the Personal Touch in M&A Dealmaking
Deal teams must avoid falling into the trap of discounting human expertise when implementing new AI tools.
By equipping deal teams with actionable and reliable real-time insights, artificial intelligence (AI) has vast potential to drive efficiencies across the entire mergers and acquisitions (M&A) lifecycle. This transformative impact is highlighted in the 2024 Artificial Intelligence in M&A Report — our comprehensive study on AI’s anticipated impact on dealmaking. According to our global survey, 62 percent of respondents believe AI is creating new opportunities in the M&A industry.
AI’s entrance into the M&A world has been gradual, and it is anticipated to become increasingly pervasive. The consultancy firm Bain & Company recently predicted that generative AI will be employed in 80 percent of M&A processes within three years. This is a huge jump from the current share of just 16 percent.
While the reliability of AI technologies is continually improving, the output can only be as reliable as the input. Dealmakers must be alert to potential bias, privacy issues and data-control concerns when deploying AI tools.
With machine learning (ML) and natural language processing (NLP) technologies requiring less and less human input, it is crucial to make sure the early stages of implementation are done right. Human involvement is still critical to safeguard quality control, mitigate bias and apply final judgment in the M&A process. Only when this symbiotic relationship is in harmony can optimal results be generated, potentially unlocking significant new value in the M&A process.
Building a strong foundation
A harmonious relationship is not a given, however, and dealmakers face key challenges. Low-quality data and the risk of information being siloed present major hurdles. It’s a reminder of the need to apply human judgment — not only in the final step of decision-making but also in early-stage data entry, processing and management.
If the data used to train the technology is unreliable or inaccurate, the AI-generated output may ultimately prove ineffective. This concern is shared by survey respondents, who cited data reliability as their top AI-related legal and regulatory compliance concern.
Avoid the bias trap
The potential for “learned bias” in AI systems is another ongoing issue that developers are grappling with. Again, this points to the importance of building a reliable foundational dataset. The problem of data bias goes beyond getting suboptimal data output — there are ethical and social considerations at play, too. If the historical data used to train AI systems contains bias — for example, a disproportionate number of male candidates in leadership roles — there is a risk that the AI will unfairly favor these candidates in the future.
Some high-profile examples have emerged of AI systems exhibiting inherent bias. Amazon, for example, decided to shelve an experimental AI recruitment tool after finding it unfairly discriminated against women. Such instances highlight the dangers of relying too heavily on technology while failing to temper its findings with impartial human judgment.
The issues this poses for M&A transactions, which rely on objective, impartial analysis, are plain to see. Eliminating the possibility of bias should be a top priority. Our survey respondents agree, with 18 percent identifying potential algorithm bias as their primary legal and regulatory compliance concern relating to AI.
Keep humans at the forefront
Ultimately, dealmaking relies on human insight. For AI tools to be most effective, human intuition and experience must always be applied in final decision-making. In this way, AI and deal teams must work to complement each other rather than operate as separate entities.
AI cannot yet replace the experience of advisors or the nuanced conversations between parties with different needs and temperaments. For maximum value to be generated, deal teams should use AI to complement their existing practices rather than to replicate human intuition or supersede practical transactional experience.
Strike the right balance
Dealmakers appear to be aware of the complex ethical considerations at play, with almost a quarter of our survey respondents (23 percent) already prioritizing the implementation of dedicated AI ethics and data security protocols to manage the impact of AI on their deal teams.
If a careful balance is struck, deal teams can leverage AI to handle manually intensive processes, freeing them to focus on more value-generating tasks. By working in harmony with AI, teams can improve efficiency while retaining their ability to apply human expertise to the findings. As AI technologies continue to develop rapidly, there is no time to lose in working toward this goal.
To learn more, download the 2024 Artificial Intelligence in M&A Report here.