Zimbabwe: Can Machines Help People Lead More Meaningful Lives?

22 October 2024

Artificial Intelligence (AI) is shaking up business and society in ways that would leave even the greatest philosophers grasping for answers.

While Aristotle's insights into leading a good life still resonate, AI is presenting new questions about who we are and what it means to be human in the age of automation.

These are not just abstract concerns; they are questions that touch every level of our society, especially in the business world.

AI is not simply another technological advance like the printing press or the steam engine -- it is a whole new category. It is not just about changing how we work; it is making us re-evaluate why we work and what gives our lives meaning.

If we delegate tasks to machines, what is left for humans to contribute? And beyond efficiency, what role does AI play in enhancing our quality of life?

Existentialism meets the machine

Philosophers like Sartre and Heidegger were all about freedom, choice, and the search for personal meaning, but AI is triggering new existential questions about work and identity.

Many fear AI could spark an existential crisis as people find their roles displaced by machines, especially in industries that have traditionally relied on human labour and decision-making. Think about the rise of automated warehouses or AI in financial services; in these settings, people are increasingly being replaced by algorithms.

Sartre would argue that we are fooling ourselves if we think handing over decision-making to AI absolves us of responsibility. The phrase "The algorithm made me do it!" reflects a moral disengagement that Sartre would reject. As humans, we cannot hide behind algorithms to justify outcomes that affect employees and customers alike. Whether it is approving a loan or making hiring decisions, AI must be held to a higher ethical standard--one that business leaders should actively guide.

The human touch in business

The real challenge for business leaders is not just deploying AI efficiently, but ensuring that humans do not become peripheral. When AI takes over the heavy lifting, how do you ensure that employees do not feel like mere cogs in a machine? How do you maintain creativity, autonomy, and a sense of purpose in a workforce increasingly dependent on algorithms for day-to-day tasks?

Consider the retail industry, where AI-driven customer service bots are becoming commonplace. They handle complaints, suggest products, and process payments, but they lack the empathy and nuance that only humans can provide. To keep the human touch alive, business leaders must emphasise roles that require emotional intelligence, judgment, and creativity--traits AI cannot replicate. The sweet spot is using AI to complement human abilities, not replace them.

Utilitarianism: The greater good or just good PR?

Utilitarian thinkers like Bentham and Mill advocated for the greatest good for the greatest number. In theory, AI could help achieve this by increasing productivity and making services more accessible. In practice, though, it often gets messy. For example, when AI-driven platforms like Uber or Amazon use dynamic pricing algorithms, customers may enjoy cheaper fares or products, but what about the workers? Are the benefits distributed fairly, or do they leave a segment of society behind?

AI may boost profits and streamline operations, but what about those who lose their jobs? What about the growing sense of being constantly watched, with AI tracking every keystroke, every second of productivity? This brings about a new kind of stress for employees -- a far cry from the "greatest good." When AI creates this level of discomfort, it raises questions about how well it aligns with Mill's vision of promoting happiness.

For business leaders, it is not just about boosting the bottom line. If your AI initiatives only serve a select few -- shareholders, perhaps -- while displacing workers or eroding trust, then you are missing the larger ethical picture. Companies that care about the long-term well-being of their stakeholders, from employees to customers, will find that short-term gains from AI do not necessarily translate into long-term happiness.

Kant's take: Respect the humans, please

Immanuel Kant would likely have strong objections to some of AI's uses today. Kantian ethics emphasises treating people as ends in themselves, not merely as means to an end. The way AI has been deployed in some businesses--surveillance tools, data collection without consent, and performance algorithms--can dehumanise the workforce. People are reduced to metrics on a dashboard, and decisions are made based on data points rather than human insights.

Consider how AI is used in performance monitoring. Algorithms in warehouses, like those of Amazon, track workers' every movement, often imposing unrealistic productivity goals. Employees are punished or replaced if they do not keep up. This approach strips people of their dignity, treating them as mere inputs in a system designed to maximise output. Kant would argue that business leaders have a moral obligation to treat employees as individuals, not data points to be optimised.

Harmony is key, says Confucius

Turning to Eastern philosophy, Confucius offers wisdom about balance and social responsibility. His philosophy suggests that business leaders should be more than just profit-chasers; they should be stewards of societal well-being. AI offers incredible potential, but it also poses risks. The Confucian ideal of harmony reminds us that technology should serve the common good, not just corporate interests.

Imagine if companies embraced a Confucian approach to AI. Instead of focusing solely on efficiency or profit, they would consider how AI could benefit the broader community. For instance, AI could be used to create more inclusive services for underserved populations or improve working conditions by automating hazardous tasks. Companies that act with social responsibility in mind could lead the way in ethical AI use, setting an example for others to follow.

Wrapping it up: Philosophy meets the real world

At its core, AI in business is not just about making things faster or cheaper. It is forcing us to ask deep questions about what we value as a society and what it means to be human in a world increasingly dominated by smart machines. Business leaders need to recognise that AI is not just another tool in the toolbox; it is a force that can reshape the very fabric of our world, for better or worse.

By reflecting on different philosophical perspectives -- existentialism, utilitarianism, Kantian ethics, and Confucianism -- businesses can navigate this new landscape with their humanity intact. AI has the potential to elevate human work and make life easier, but only if we use it thoughtfully. Leaders should regularly ask themselves not just "Can we do this?" but "Should we do this?" The future of work, and the well-being of the workforce, depend on the answer to that question.

In the end, we are all in this together -- humans and machines alike. Let us ensure we steer this AI-powered ship towards a future where technology serves us, rather than the other way around.

Dr Gift Kugara, PhD, is a lecturer in Analytics at London South Bank University Business School. Tel: +44 (0) 7595657754.
