Something felt ‘off’ – how AI messed with human research, and what we learned
https://stuff.co.za/2024/03/18/how-ai-messed-with-human-research-what-we/ | Mon, 18 Mar 2024

All levels of research are being changed by the rise of artificial intelligence (AI). Don’t have time to read that journal article? AI-powered tools such as TLDRthis will summarise it for you.

Struggling to find relevant sources for your review? Inciteful will list suitable articles with just the click of a button. Are your human research participants too expensive or complicated to manage? Not a problem – try synthetic participants instead.

Each of these tools suggests AI could be superior to humans in outlining and explaining concepts or ideas. But can humans be replaced when it comes to qualitative research?

This is something we recently had to grapple with while carrying out unrelated research into mobile dating during the COVID-19 pandemic. And what we found should temper enthusiasm for artificial responses over the words of human participants.

Encountering AI in our research

Our research is looking at how people might navigate mobile dating during the pandemic in Aotearoa New Zealand. Our aim was to explore broader social responses to mobile dating as the pandemic progressed and as public health mandates changed over time.

As part of this ongoing research, we prompt participants to develop stories in response to hypothetical scenarios.

In 2021 and 2022 we received a wide range of intriguing and quirky responses from 110 New Zealanders recruited through Facebook. Each participant received a gift voucher for their time.

Participants described characters navigating the challenges of “Zoom dates” and clashing over vaccination statuses or wearing masks. Others wrote passionate love stories with eyebrow-raising details. Some even broke the fourth wall and wrote directly to us, complaining about the mandatory word length of their stories or the quality of our prompts.

[Image: A human-generated story about dating during the pandemic.]

These responses captured the highs and lows of online dating, the boredom and loneliness of lockdown, and the thrills and despair of finding love during the time of COVID-19.

But, perhaps most of all, these responses reminded us of the idiosyncratic and irreverent aspects of human participation in research – the unexpected directions participants go in, or even the unsolicited feedback you can receive when doing research.

But in the latest round of our study in late 2023, something had clearly changed across the 60 stories we received.

This time, many of the stories felt “off”. Word choices were stilted or overly formal, and each story was oddly moralistic about what one “should” do in a given situation.

Using AI detection tools, such as ZeroGPT, we concluded participants – or even bots – were using AI to generate story answers for them, possibly to receive the gift voucher for minimal effort.
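The authors do not describe their screening pipeline in detail, but for readers curious how such detectors tend to work, the sketch below shows one common heuristic: scoring how statistically “predictable” a story is (its perplexity) under a small language model. This is an illustrative assumption, not ZeroGPT’s actual method or the researchers’ procedure; the model choice, example story and threshold are all hypothetical.

```python
# Illustrative sketch only: perplexity scoring is one common heuristic for
# flagging possibly machine-generated text. It is an assumption here, not the
# method ZeroGPT (or the researchers) actually used.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Average 'surprise' of a small language model reading the text."""
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])  # loss = mean cross-entropy
    return torch.exp(out.loss).item()

# Hypothetical participant story; the 40.0 cut-off is purely a placeholder.
story = "Their first Zoom date was awkward, but lockdown made everything feel urgent."
score = perplexity(story)
# Unusually low perplexity (very "predictable" prose) is a weak signal of
# AI-generated text, never proof on its own.
print(f"perplexity = {score:.1f} -> {'flag for review' if score < 40.0 else 'looks human-written'}")
```

Heuristics like this misfire often enough that a low score is best treated as a prompt for closer human reading, not as proof of AI authorship.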

[Image: Moralistic and stilted – an AI-generated story about dating during the pandemic.]

Contrary to claims that AI can sufficiently replicate human participants in research, we found AI-generated stories to be woeful.

We were reminded that an essential ingredient of any social research is for the data to be based on lived experience.

Is AI the problem?

Perhaps the biggest threat to human research is not AI itself, but rather the philosophy that underpins it.

It is worth noting the majority of claims about AI’s capabilities to replace humans come from computer scientists or quantitative social scientists. In these types of studies, human reasoning or behaviour is often measured through scorecards or yes/no statements.

This approach necessarily fits human experience into a framework that can be more easily analysed through computational or artificial interpretation.

In contrast, we are qualitative researchers who are interested in the messy, emotional, lived experience of people’s perspectives on dating. We were drawn to the thrills and disappointments participants originally pointed to with online dating, the frustrations and challenges of trying to use dating apps, as well as the opportunities they might create for intimacy during a time of lockdowns and evolving health mandates.


In general, we found AI poorly simulated these experiences.

Some might accept that generative AI is here to stay, or that it should be viewed as offering various tools to researchers. Other researchers might retreat to forms of data collection, such as surveys, that minimise the interference of unwanted AI participation.

But, based on our recent research experience, we believe theoretically-driven, qualitative social research is best equipped to detect and protect against AI interference.

There are additional implications for research. The threat of AI as an unwanted participant means researchers will have to work longer or harder to spot imposter participants.

Academic institutions need to start developing policies and practices to reduce the burden on individual researchers trying to carry out research in the changing AI environment.

Regardless of researchers’ theoretical orientation, how we work to limit the involvement of AI is a question for anyone interested in understanding human perspectives or experiences. If anything, the limitations of AI reemphasise the importance of being human in social research.


  • Alexandra Gibson is a Senior Lecturer in Health Psychology, Te Herenga Waka — Victoria University of Wellington
  • Alex Beattie is a Research Fellow, School of Health, Te Herenga Waka — Victoria University of Wellington
  • This article first appeared in The Conversation

Emotion-tracking AI on the job: Workers fear being watched – and misunderstood
https://stuff.co.za/2024/03/13/emotion-tracking-ai-on-the-job-workers/ | Wed, 13 Mar 2024

Emotion artificial intelligence (AI) uses biological signals such as vocal tone, facial expressions and data from wearable devices, as well as text and how people use their computers, promising to detect and predict how someone is feeling. It is used in contexts both mundane, like entertainment, and high stakes, like the workplace, hiring and health care.

A wide range of industries already use emotion AI, including call centres, finance, banking, nursing and caregiving. More than 50% of large employers in the U.S. use emotion AI with the aim of inferring employees’ internal states, a practice that grew during the COVID-19 pandemic. For example, call centres monitor what their operators say and their tone of voice.

Scholars have raised concerns about emotion AI’s scientific validity and its reliance on contested theories about emotion. They have also highlighted emotion AI’s potential for invading privacy and exhibiting racial, gender and disability bias.

Some employers use the technology as though it were flawless, while scholars variously seek to reduce its bias and improve its validity, discredit it altogether, or suggest banning emotion AI, at least until more is known about its implications.

I study the social implications of technology. I believe that it is crucial to examine emotion AI’s implications for people subjected to it, such as workers – especially those marginalized by their race, gender or disability status.

Workers’ concerns

To understand where emotion AI used in the workplace is going, my colleague Karen Boyd and I set out to examine inventors’ conceptions of emotion AI in the workplace. We analyzed patent applications that proposed emotion AI technologies for the workplace. Purported benefits claimed by patent applicants included assessing and supporting employee well-being, ensuring workplace safety, increasing productivity and aiding in decision-making, such as making promotions, firing employees and assigning tasks.

We wondered what workers think about these technologies. Would they also perceive these benefits? For example, would workers find it beneficial for employers to provide well-being support to them?

My collaborators Shanley Corvite, Kat Roemmich, Tillie Ilana Rosenberg and I conducted a survey partly representative of the U.S. population and partly oversampled for people of colour, trans and nonbinary people, and people living with mental illness. These groups may be more likely to experience harm from emotion AI. Our study had 289 participants from the representative sample and 106 participants from the oversample. We found that 32% of respondents reported experiencing or expecting no benefit from emotion AI use, whether current or anticipated, in their workplace.

While some workers noted potential benefits of emotion AI use in the workplace like increased well-being support and workplace safety, mirroring benefits claimed in patent applications, all also expressed concerns. They were concerned about harm to their well-being and privacy, harm to their work performance and employment status, and bias and mental health stigma against them.

For example, 51% of participants expressed concerns about privacy, 36% noted the potential for incorrect inferences employers would accept at face value, and 33% expressed concern that emotion AI-generated inferences could be used to make unjust employment decisions.

Participants’ voices

One participant who had multiple health conditions said: “The awareness that I am being analyzed would ironically have a negative effect on my mental health.” This means that despite emotion AI’s claimed goals to infer and improve workers’ well-being in the workplace, its use can lead to the opposite effect: well-being diminished due to a loss of privacy. Indeed, other work by my colleagues Roemmich, Florian Schaub and I suggests that emotion AI-induced privacy loss can span a range of privacy harms, including psychological, autonomy, economic, relationship, physical and discrimination.

On concerns that emotional surveillance could jeopardize their job, a participant with a diagnosed mental health condition said: “They could decide that I am no longer a good fit at work and fire me. Decide I’m not capable enough and not give a raise, or think I’m not working enough.”

Participants in the study also mentioned the potential for exacerbated power imbalances and said they were afraid of the dynamic they would have with employers if emotion AI were integrated into their workplace, pointing to how emotion AI use could potentially intensify already existing tensions in the employer-worker relationship. For instance, a respondent said: “The amount of control that employers already have over employees suggests there would be few checks on how this information would be used. Any ‘consent’ [by] employees is largely illusory in this context.”

Lastly, participants noted potential harms, such as emotion AI’s technical inaccuracies potentially creating false impressions about workers, and emotion AI creating and perpetuating bias and stigma against workers. In describing these concerns, participants highlighted their fear of employers relying on inaccurate and biased emotion AI systems, particularly against people of colour, women and trans individuals.

For example, one participant said: “Who is deciding what expressions ‘look violent,’ and how can one determine people as a threat just from the look on their face? A system can read faces, sure, but not minds. I just cannot see how this could actually be anything but destructive to minorities in the workplace.”

Participants noted that they would either refuse to work at a place that uses emotion AI – an option not available to many – or engage in behaviours to make emotion AI read them favourably to protect their privacy. One participant said: “I would exert a massive amount of energy masking even when alone in my office, which would make me very distracted and unproductive,” pointing to how emotion AI use would impose additional emotional labour on workers.

Worth the harm?

These findings indicate that emotion AI exacerbates existing challenges experienced by workers in the workplace, despite proponents claiming emotion AI helps solve these problems.

If emotion AI does work as claimed and measures what it claims to measure, and even if issues with bias are addressed in the future, there are still harms experienced by workers, such as the additional emotional labour and loss of privacy.


If these technologies do not measure what they claim or they are biased, then people are at the mercy of algorithms deemed to be valid and reliable when they are not. Workers would still need to expend the effort to try to reduce the chances of being misread by the algorithm or to engage in emotional displays that would read favourably to the algorithm.

Either way, these systems function as panopticon-like technologies, creating privacy harms and feelings of being watched.


Don’t be alarmed: AI won’t leave half the world unemployed
https://stuff.co.za/2016/02/18/dont-be-alarmed-ai-wont-leave-half-the-world-unemployed/ | Thu, 18 Feb 2016

Alarmist headlines this week claim artificial intelligence (AI) will put half of us out of work.

These headlines – and there were several – stem from comments by Rice University computer scientist Moshe Vardi, who at the weekend asked what society would do when, within 30 years, machines become capable of doing almost any job a human can.

As ever, reality is likely to be far more nuanced than sensational headlines.

The most detailed study in this area came out in September 2013 from the Oxford Martin School. This report predicted that 47% of jobs in the US were under threat of automation. Similar studies have since been performed for other countries, reaching broadly similar conclusions.

Now, there’s a lot I would disagree with in the Oxford report. But, for the sake of the discussion here, let’s just suppose for a moment that the report is correct.

Even with this assumption, you cannot conclude that half of us will be unemployed in 30 or so years. The Oxford report merely estimated the number of jobs that are potentially automatable over the next few decades. There are many reasons why this will not translate into 47% unemployment.

We still want a human on the job

The report merely estimated the number of jobs that are susceptible to automation. Some of these jobs won’t be automated in practice for economic, societal, technical and other reasons.

For example, we can pretty much automate the job of an airline pilot today. Indeed, most of the time, a computer is flying your plane. But society is likely to continue to demand the reassurance of having a pilot on board even if they are just reading their iPad most of the time.

As a second example, the Oxford report gives a 94% chance that the job of bicycle repairer will be automated. But it is likely to be very expensive and difficult to automate this job, and therefore uneconomic to do so.

We also need to consider all the new jobs that technology will create. For example, we don’t employ many printers setting type any more. But we do employ many more people in the digital equivalent, making web pages.

Of course, if you are a printer and your job is destroyed, it helps if you’re suitably educated so you can re-position yourself in one of these new industries.

Some of these jobs will only be partially automated, and automation will in fact enhance a person’s ability to do the job. For example, the Oxford report gives a 98% chance that umpiring or refereeing will be automated. But we are likely to have just as many, if not more, umpires and referees in the future, even if they use technologies to do their job better.

Automation can create employment

In fact, the US Department of Labor predicts that we will see a 5% increase in umpires and referees over the next decade.

The Oxford report gives a 63% chance that geoscientists will be automated. But automation is more likely to permit geoscientists to do more geoscience.

Indeed, the US Department of Labor actually predicts the next decade will see a 10% increase in the number of geoscientists as we seek to make more of the planet’s diminishing resources.

We also need to consider how the working week will change over the next few decades. Most countries in the developed world have seen the number of hours worked per week decrease significantly since the start of the industrial revolution.

In the US, the average working week has declined from around 60 hours to just 33. Other developed countries are even lower; Germans, for example, work only 26 hours per week. If these trends continue, we will need to create more jobs to replace these lost hours.

In my view, it’s hard to predict with any certainty how many of us will really be unemployed in a few decades’ time, but I am very sceptical that it will be half of us. Society would break down well before we got to 50% unemployment.

My guess is that it will be at most half of this prediction – around 25%. This is nevertheless an immense change, and one that we need to start planning for and mitigating today.
