Something felt ‘off’ – how AI messed with human research, and what we learned
https://stuff.co.za/2024/03/18/how-ai-messed-with-human-research-what-we/ | Mon, 18 Mar 2024 07:10:19 +0000

All levels of research are being changed by the rise of artificial intelligence (AI). Don’t have time to read that journal article? AI-powered tools such as TLDRthis will summarise it for you.

Struggling to find relevant sources for your review? Inciteful will list suitable articles with just the click of a button. Are your human research participants too expensive or complicated to manage? Not a problem – try synthetic participants instead.

Each of these tools suggests AI could be superior to humans in outlining and explaining concepts or ideas. But can humans be replaced when it comes to qualitative research?

This is something we recently had to grapple with while carrying out research unrelated to AI: a study of mobile dating during the COVID-19 pandemic. What we found should temper enthusiasm for artificial responses over the words of human participants.

Encountering AI in our research

Our research is looking at how people might navigate mobile dating during the pandemic in Aotearoa New Zealand. Our aim was to explore broader social responses to mobile dating as the pandemic progressed and as public health mandates changed over time.

As part of this ongoing research, we prompt participants to develop stories in response to hypothetical scenarios.

In 2021 and 2022 we received a wide range of intriguing and quirky responses from 110 New Zealanders recruited through Facebook. Each participant received a gift voucher for their time.

Participants described characters navigating the challenges of “Zoom dates” and clashing over vaccination statuses or wearing masks. Others wrote passionate love stories with eyebrow-raising details. Some even broke the fourth wall and wrote directly to us, complaining about the mandatory word length of their stories or the quality of our prompts.

A human-generated story about dating during the pandemic.

These responses captured the highs and lows of online dating, the boredom and loneliness of lockdown, and the thrills and despair of finding love during the time of COVID-19.

But, perhaps most of all, these responses reminded us of the idiosyncratic and irreverent aspects of human participation in research – the unexpected directions participants go in, or even the unsolicited feedback you can receive when doing research.

But in the latest round of our study in late 2023, something had clearly changed across the 60 stories we received.

This time many of the stories felt “off”. Word choices were quite stilted or overly formal. And each story was quite moralistic in terms of what one “should” do in a situation.

Using AI detection tools, such as ZeroGPT, we concluded participants – or even bots – were using AI to generate story answers for them, possibly to receive the gift voucher for minimal effort.
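To make that screening step concrete, here is a minimal Python sketch of how responses might be run through a detector. It assumes a hypothetical `detect_ai_probability()` helper standing in for a real service such as ZeroGPT (whose actual interface isn’t reproduced here), and it flags suspect stories for human review rather than rejecting them outright:

```python
# Illustrative screening pass over story responses. Detectors are imperfect,
# so anything flagged goes to a human for review; nothing is auto-rejected.
import csv

THRESHOLD = 0.8  # flag responses the detector scores above 80% "AI-likely"

def detect_ai_probability(text: str) -> float:
    """Hypothetical stand-in for an AI-text detector such as ZeroGPT.

    Returns a score in [0, 1]; the real service's API is not shown here.
    """
    raise NotImplementedError("wire up your chosen detection service here")

def screen_responses(path: str) -> list[dict]:
    """Run every story through the detector and collect the suspect ones."""
    flagged = []
    with open(path, newline="", encoding="utf-8") as f:
        # assumes a CSV with 'participant_id' and 'story' columns
        for row in csv.DictReader(f):
            score = detect_ai_probability(row["story"])
            if score >= THRESHOLD:
                flagged.append({"id": row["participant_id"], "score": score})
    return flagged  # a starting point for human judgement, not a verdict
```

The flagged list is only a starting point: detectors produce false positives, so the final call still rests with a human reader.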

Moralistic and stilted: an AI-generated story about dating during the pandemic.

Contrary to claims that AI can sufficiently replicate human participants in research, we found AI-generated stories to be woeful.

We were reminded that an essential ingredient of any social research is for the data to be based on lived experience.

Is AI the problem?

Perhaps the biggest threat to human research is not AI itself, but rather the philosophy that underpins it.

It is worth noting the majority of claims about AI’s capabilities to replace humans come from computer scientists or quantitative social scientists. In these types of studies, human reasoning or behaviour is often measured through scorecards or yes/no statements.

This approach necessarily fits human experience into a framework that can be more easily analysed through computational or artificial interpretation.

In contrast, we are qualitative researchers who are interested in the messy, emotional, lived experience of people’s perspectives on dating. We were drawn to the thrills and disappointments participants originally described in online dating, the frustrations and challenges of trying to use dating apps, as well as the opportunities the apps might create for intimacy during a time of lockdowns and evolving health mandates.


Read More: Emotion-tracking AI on the job: Workers fear being watched – and misunderstood


In general, we found AI poorly simulated these experiences.

Some might accept that generative AI is here to stay, or argue that it should be viewed as offering various tools to researchers. Other researchers might retreat to forms of data collection, such as surveys, that minimise the interference of unwanted AI participation.

But, based on our recent research experience, we believe theoretically-driven, qualitative social research is best equipped to detect and protect against AI interference.

There are additional implications for research. The threat of AI as an unwanted participant means researchers will have to work longer or harder to spot imposter participants.

Academic institutions need to start developing policies and practices to reduce the burden on individual researchers trying to carry out research in the changing AI environment.

Regardless of researchers’ theoretical orientation, how we work to limit the involvement of AI is a question for anyone interested in understanding human perspectives or experiences. If anything, the limitations of AI reemphasise the importance of being human in social research.


  • Alexandra Gibson is a Senior Lecturer in Health Psychology, Te Herenga Waka — Victoria University of Wellington
  • Alex Beattie is a Research Fellow, School of Health, Te Herenga Waka — Victoria University of Wellington
  • This article first appeared in The Conversation

OpenAI announces Sora, a new text-to-video tool
https://stuff.co.za/2024/02/19/openai-announces-sora-text-to-video-tool/ | Mon, 19 Feb 2024 09:31:36 +0000

OpenAI recently announced Sora, its new generative AI model designed to create shockingly impressive videos from text prompts. It’s built on Dall-E 3, the company’s image-generation model, which itself uses a version of the company’s GPT large language model.

This isn’t the first text-to-video tool to emerge from the generative AI boom but, based on the examples shown, it generates the most realistic videos we’ve seen so far.

While Sora hasn’t received a full release yet, it is already capable of creating “complex scenes with multiple characters, specific types of motion, and accurate details of the subject and background,” while also understanding the context of the user’s prompt and how it might affect the simulated physical world.

It can also produce multiple shots within a single generated video from the original prompt, while maintaining the visual style and keeping persistent subjects or characters consistent.

Sora still makes mistakes. How long will that last?

Image: OpenAI

When Sora eventually becomes available, it won’t be without limitations. Generated videos will be capped at 60 seconds, at least initially, so we probably won’t see any feature-length Sora-generated films in a hurry.

As with any generative AI model, Sora is still prone to mistakes. OpenAI says Sora struggles to accurately simulate a complex scene’s physics and has trouble with “specific instances of cause and effect”. It might fail to add a bite mark to a cookie after someone takes a bite, for example.

Sora is currently only available to OpenAI’s ‘Red Team’. These are folks who look for possible ways it could be abused or exploited, like prompting it with malicious material, to learn how the model reacts so they can make adjustments to prevent the same reaction when it launches — kinda like breaking into your own house to find your security weak points.

OpenAI is also working with visual artists and filmmakers who will hopefully provide constructive feedback to improve the model before a wider release.

There are already examples of short films made with AI-generated content, like Sunspring, which was written by an AI model trained on existing movie scripts. That was released in 2016, a good few years before ChatGPT was introduced — and it shows. It also still required humans to act in and shoot the film.

With Sora, it may soon be possible to remove humans from the process altogether. We’re sure Martin Scorsese is thrilled about that.

Chat with RTX turns your graphics card into a locally hosted AI chatbot
https://stuff.co.za/2024/02/15/chat-with-rtx-turns-gpu-into-ai-chatbot/ | Thu, 15 Feb 2024 08:56:29 +0000

If you aren’t satisfied with the current AI chatbot offerings, Nvidia recently released a new one that works a little differently from the rest – ‘Chat with RTX’ is available right now as a free demo that runs locally on your Windows PC.

Instead of using cloud-based LLM (large language model) services like OpenAI’s ChatGPT or Microsoft’s Copilot, Nvidia says Chat with RTX allows users to quickly and easily “connect local files on a PC as a dataset to an open-source large language model like Mistral or Llama 2.”

The examples shown in Nvidia’s demo include things like asking for the name of a restaurant that someone recommended. Chat with RTX produced the answer with links to the relevant files as references.

Chat with RTX instead of humans

It supports common file formats like .txt, .pdf, .doc or .docx, and .xml, and will even accept URLs of YouTube videos and playlists. Seeing as it runs entirely locally, it doesn’t require an internet connection (unless you want it to watch YouTube, presumably), so it won’t share your data with Nvidia or any other third-party servers, making it a more private and secure AI chatbot.

It might work differently from other AI chatbots but that doesn’t mean it is immune to the same bugs. It is still a free demo, so don’t expect a perfectly polished product. It also comes with its own limitations and hardware requirements.

Instead of using Nvidia-powered cloud servers like most of its AI cousins, Chat with RTX “uses retrieval-augmented generation (RAG), NVIDIA TensorRT-LLM software and NVIDIA RTX acceleration” to turn your GPU into something equivalent. That’s no easy task, so you’ll need an Nvidia GeForce RTX 30- or 40-series GPU with at least 8GB of VRAM and be running Windows for it to work.
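For the curious, here is roughly what that RAG recipe looks like in practice. This is a minimal Python sketch, not Nvidia’s implementation: TF-IDF retrieval stands in for the embedding pipeline Chat with RTX actually uses, and `generate()` is a hypothetical placeholder for the locally hosted Mistral or Llama 2 model.

```python
# A minimal sketch of retrieval-augmented generation (RAG) over local files:
# split documents into chunks, retrieve the chunks most similar to the
# question, and prepend them to the prompt sent to a local LLM.
from pathlib import Path

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def load_chunks(folder: str, size: int = 500) -> list[str]:
    """Split every .txt file in `folder` into ~size-character chunks."""
    chunks = []
    for path in Path(folder).glob("*.txt"):
        text = path.read_text(encoding="utf-8", errors="ignore")
        chunks += [text[i:i + size] for i in range(0, len(text), size)]
    return chunks

def retrieve(question: str, chunks: list[str], k: int = 3) -> list[str]:
    """Return the k chunks most similar to the question (TF-IDF cosine)."""
    vectoriser = TfidfVectorizer().fit(chunks + [question])
    scores = cosine_similarity(
        vectoriser.transform([question]), vectoriser.transform(chunks)
    )[0]
    return [chunks[i] for i in scores.argsort()[-k:][::-1]]

def answer(question: str, folder: str) -> str:
    """Build a context-stuffed prompt and hand it to the local model."""
    context = "\n---\n".join(retrieve(question, load_chunks(folder)))
    prompt = f"Answer using only this context:\n{context}\n\nQ: {question}\nA:"
    return generate(prompt)  # hypothetical call into the local LLM
```

The point of the retrieval step is that the model never needs retraining on your files; it only ever sees the handful of snippets fetched per question, which is also why the whole loop can stay on your machine.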

Google Gemini replaces Bard as catch-all AI platform
https://stuff.co.za/2024/02/09/google-gemini-replaces-bard/ | Fri, 09 Feb 2024 10:16:24 +0000

Google announced on Thursday that it is consolidating its AI offerings by folding everything AI-related that it currently offers into the Google Gemini brand. It also announced a new Android app and an overpriced Google One subscription tier, a year after Bard sang its first ballad.

If you can remember as far back as February last year, Google Bard’s launch came hot on the heels of Microsoft’s Copilot launch. Copilot celebrated its first birthday this week with a redesign and a Super Bowl ad; now it’s Google’s turn.

Google Gemini

Included in Google Gemini are The Chatbot Formerly Known as Bard, Google’s Duet AI features aimed at developers, and Gemini Ultra 1.0 — the new version of the company’s large language model (LLM).

For most folks, the easiest way to experience Gemini will be through mobile apps — there’s a new Google Gemini app for Android while iPhone users will find Gemini in the Google app — but everyone outside the US will have to wait until next week for the wider rollout.

You won’t have to wait to start giving Google your money, however. The new ‘AI Premium’ tier of Google One is already available to South Africans for R430/m. This gives you 2TB of Google Drive storage, access to the Gemini Ultra 1.0 LLM, and, eventually, Gemini’s help in Google Workspace apps like Google Docs and Sheets.

That might sound like a lot of money, but it’s roughly the same price as a ChatGPT Plus subscription. Still, Google Gemini is going to need more than a similarly priced subscription if it hopes to distinguish itself from the AI competition.

Source

Microsoft Copilot celebrates 1st birthday with redesign on web and mobile
https://stuff.co.za/2024/02/08/microsoft-copilot-1st-birthday-redesign/ | Thu, 08 Feb 2024 08:56:21 +0000

What better way to celebrate your first birthday than with a fresh coat of paint? Well, we could think of a few other ideas but Microsoft has gone with the paint option for Copilot.

Copilot launched a year ago, and Microsoft says its web interface and mobile app (available on iOS and Android) now have “a more streamlined look and feel” which will supposedly make it easier to “bring your ideas to life” and “gain understanding about the world”. There’s also a “fun” new set of suggested prompts because some people need a little help imagining things.

Oh, and there’s a new Copilot Super Bowl ad

As luck would have it, this “significant new update” doesn’t only mark Copilot’s first birthday but also happily lands a few days before Super Bowl Sunday, the world’s biggest (and only) American football championship game. We don’t need to tell you it’s a big deal if you’re in America. For everyone else, the halftime show sometimes has a few good ads… we guess.

The new paint job and ad aren’t the only changes to emerge. Microsoft has also improved the platform’s image-editing and creation feature called ‘Designer in Copilot’. Free users can now edit their generated images inline without leaving the chat.

Those who cough up some cash for the Copilot Pro subscription can also resize or regenerate images. Finally, Microsoft also mentioned something called ‘Designer GPT’ that will roll out soon and provide a “dedicated canvas” within the platform so you can “visualize your ideas”.

Sounds riveting. Here’s the ad.

AI in HR: Are you cool with being recruited by a robot? Our studies reveal job candidates’ true feelings
https://stuff.co.za/2024/01/28/ai-in-hr-are-you-cool-with-being-recruited/ | Sun, 28 Jan 2024 14:51:13 +0000

Artificial Intelligence (AI) is transforming the human resource management (HRM) industry faster than most of us realise. Sixty-five percent of organisations are already using AI-enabled tools in the hiring process, but only a third of job candidates are aware of the practice.

Pros and cons of AI in recruitment

In recruitment, AI-enabled tools have the ability to collect large amounts of organisational data to search, identify, evaluate, rank, and select job candidates. They can assemble information on hiring needs across teams, generate advertisements with model candidate traits, and highlight potential candidates from a range of digital platforms.

AI-enabled tools have long promised efficiency in the processing of applicants’ documents while potentially reducing the bias from HR agents who might, intentionally or not, discriminate or unjustly judge some applications.

However, emerging evidence suggests that AI-enabled HR tools may discriminate against certain candidates who may not fit the historical pattern for the job description, such as candidates who are female (in STEM) or those with gaps on their resumes due to illness, disability, caring for a family member, unemployment, or time served in prison.

Those of us who worry about the use of AI in HR won’t be reassured by its track record in other fields. Tech giants including Apple, IBM, and Microsoft – all of whom presumably know what they’re doing – have faced scrutiny for ethical failures, especially with regards to gender discrimination. For example, US regulators investigated Apple in 2019 after its AI-powered credit-card service was revealed to be systematically offering women lower credit limits. The alarm was raised by several couples, including Apple co-founder Steve Wozniak and his wife: the algorithm offered him a far higher credit limit than hers, even though the couple held joint accounts.

Perceptions matter

Available data on AI in recruitment suggests that job seekers are instinctively critical of its use. Candidates subjected to autonomous AI decisions describe the process as “undignified” or “unfair”.

Other research suggests that judgement is less harsh in different contexts. According to a November 2023 survey by Tideo, only 31% of respondents would agree to allow AI to decide whether or not they get hired. But that figure rises to 75% if there’s also a human presence involved in the process. Still, 25% of participants believe that any use of artificial intelligence in recruitment is unfair.

Prior to our research, ethical perceptions of organisations using AI-enabled tools in the hiring process hadn’t been studied much. Most scholarly research on the topic focused on the fairness of the practice or trust in the technology — for example, chatbots — rather than trust in the organisations themselves.


Read More: An international body will need to oversee AI regulation, but we need to think carefully about what it looks like


In two publications in the Journal of Business Ethics, we looked at how the use of AI in hiring might impact job seekers’ or recently hired individuals’ trust in the company. We found that their perceptions of AI determine whether they identify the organisation using it as trustworthy or even attractive and innovative.

Perceptions vary depending on individuals’ personal values, past experiences, and technology acceptance. They also vary across contexts and applications. For instance, whereas an individual might trust the effectiveness of AI to predict movie preferences, studies show that most would still prefer a human, or a human-AI collaboration rather than autonomous AI, to make a hiring determination.

Ethics are attractive

In a June 2022 study on AI ethics and organisational trust, we found that candidates who perceive AI in the hiring process as highly effective, from a performance standpoint, are 64% more likely to trust the organisations that use it.

We followed up with a March 2023 study on a related subject. We found that the higher an individual’s ethical perceptions of using AI in hiring, the more attractive he or she finds the organisation. For instance, candidates who perceive that it is ethical for an organisation to use AI to analyse their personal social media content or analyse an audio interview for voice cues are 25% more likely to perceive that organisation as attractive.

Human-AI balance is key

Human-resources managers face an increasingly complex ethical environment, where AI involves a fast-growing set of applications. Organisations that are determined to keep the “human” in HR will need to carefully balance both in the hiring process, while taking into consideration factors such as transparency and financial expectations.

Along with other studies, our research brings new urgency to the task of integrating AI ethics into the governance of every organisation.


How a New York Times copyright lawsuit against OpenAI could potentially transform how AI and copyright work
https://stuff.co.za/2024/01/26/new-york-times-copyright-lawsuit-openai/ | Fri, 26 Jan 2024 07:10:57 +0000

On December 27, 2023, the New York Times (NYT) filed a lawsuit in the Federal District Court in Manhattan against Microsoft and OpenAI, the creator of ChatGPT, alleging that OpenAI had unlawfully used its articles to create artificial intelligence (AI) products.

Citing copyright infringement and the importance of independent journalism to democracy, the newspaper further alleged that even though the defendant, OpenAI, may have “engaged in wide scale copying from many sources, they gave Times content particular emphasis” in training generative artificial intelligence (GenAI) tools such as Generative Pre-Trained Transformers (GPT). This is the kind of technology that underlies products such as the AI chatbot ChatGPT.

The complaint by the New York Times states that OpenAI took millions of copyrighted news articles, in-depth investigations, opinion pieces, reviews, how-to guides and more in an attempt to “free ride on the Times’s massive investment in its journalism”.

In a blog post published by OpenAI on January 8, 2024, the tech company responded to the allegations by emphasising its support of journalism and partnerships with news organisations. It went on to say that the “NYT lawsuit is without merit”.

In the months prior to the complaint being lodged by the New York Times, OpenAI had entered into agreements with large media companies such as Axel Springer and the Associated Press. Notably, though, the Times failed to reach an agreement with the tech company.

The NYT case is important because it is different to other cases involving AI and copyright, such as the case brought by the online photo library Getty Images against the tech company Stability AI earlier in 2023. In this case, Getty Images alleged that Stability AI processed millions of copyrighted images using a tool called Stable Diffusion, which generates images from text prompts using AI.

The main difference between this case and the New York Times one is that the newspaper’s complaint highlighted actual outputs from OpenAI’s tools. The Times provided examples of its articles being reproduced almost verbatim.

Use of material

The defence available to OpenAI is “fair use” under the US Copyright Act 1976, section 107. This is because the unlicensed use of copyright material to train generative AI models can serve as a “transformative use”, which changes the original material. However, the complaint from the New York Times also says that OpenAI’s chatbots bypassed the newspaper’s paywalls to create summaries of articles.

Even though summaries do not infringe copyright, the New York Times could use them to try to demonstrate a negative commercial impact on the newspaper – challenging the fair use defence.

This case could ultimately be settled out of court. It is also possible that the Times’ lawsuit was more a negotiating tactic than a real attempt to go all the way to trial. Whichever way the case proceeds, it could have important implications for both traditional media and AI development.

It also raises the question of the suitability of current copyright laws to deal with AI. In a submission to the House of Lords communications and digital select committee on December 5, 2023, OpenAI claimed that “it would be impossible to train today’s leading AI models without copyrighted materials”.

It went on to say that “limiting training data to public domain books and drawings created more than a century ago might yield an interesting experiment but would not provide AI systems that meet the needs of today’s citizens”.

Looking for answers

The EU’s AI Act – the world’s first comprehensive AI law – might give us insights into some future directions. Among its many articles, there are two provisions particularly relevant to copyright.

The first provision, titled “Obligations for providers of general-purpose AI models”, includes two distinct requirements related to copyright. Section 1(c) requires providers of general-purpose AI models to put in place a policy to respect EU copyright law.

Section 1(d) requires providers of general-purpose AI systems to draw up and make publicly available a detailed summary of the content used to train them.

While section 1(d) raises some questions, section 1(c) makes it clear that any use of copyright-protected content requires the authorisation of the rights holder concerned unless relevant copyright exceptions apply. Where the right to opt out has been expressly reserved in an appropriate manner, providers of general-purpose AI models, such as OpenAI, will need to obtain authorisation from rights holders if they want to carry out text and data mining on their copyrighted works.

Even though the EU AI Act may not be directly relevant to the New York Times complaint against OpenAI, it illustrates the way in which copyright laws will be designed to deal with this fast-moving technology. In future, we are likely to see more media organisations relying on laws like this to protect journalism and creativity. In fact, even before the EU AI Act was passed, the New York Times blocked OpenAI from trawling its content. The Guardian followed suit in September 2023 – as did many others.


Read More: Why Elon Musk’s X has it all wrong about linking to news sites


However, the move did not remove material from existing training data sets. Any copyrighted material used to train the models up until then could therefore still surface in OpenAI’s outputs – which is what led to negotiations between the New York Times and OpenAI breaking down.

With laws such as those in the EU AI Act now placing legal obligations on general-purpose AI models, their future could look more constrained in the way they use copyrighted works to train and improve their systems. We can expect other jurisdictions to update their copyright laws with provisions similar to those of the EU AI Act in an attempt to protect creativity. As for traditional media: ever since the rise of the internet and social media, news outlets have struggled to draw readers to their sites, and generative AI has simply exacerbated this issue.

This case will not spell the end of generative AI or copyright. However, it certainly raises questions for the future of AI innovation and the protection of creative content. AI will continue to grow and develop, and we will continue to see and experience its many benefits. But the time has come for policymakers to take serious note of these developments and update copyright laws, protecting creators in the process.


  • Dinusha Mendis is a Professor of Intellectual Property and Innovation Law and Director of the Centre for Intellectual Property Policy and Management (CIPPM), Bournemouth University
  • This article first appeared in The Conversation

1 in 3 people are lonely. Will AI help, or make things worse?
https://stuff.co.za/2024/01/08/1-in-3-people-are-lonely-will-ai-help/ | Mon, 08 Jan 2024 07:13:39 +0000

ChatGPT has repeatedly made headlines since its release late last year, with various scholars and professionals exploring its potential applications in both work and education settings. However, one area receiving less attention is the tool’s usefulness as a conversationalist and – dare we say – as a potential friend.

Some chatbots have left an unsettling impression. Microsoft’s Bing chatbot alarmed users earlier this year when it threatened and attempted to blackmail them.

Yet pop culture has long conjured visions of autonomous systems living with us as social companions, whether that’s Rosie the robot from The Jetsons, or the super-intelligent AI, Samantha, from the 2013 movie Her. Will we develop similar emotional attachments to new and upcoming chatbots? And is this healthy?

While generative AI itself is relatively new, the fields of belonging and human-computer interaction have been explored reasonably well, with results that may surprise you.

Our latest research shows that, at a time when 1 in 3 Australians are experiencing loneliness, there may be space for AI to fill gaps in our social lives. That’s assuming we don’t use it to replace people.

Can you make friends with a robot?

As far back as the popularisation of the internet, scholars have been discussing how AI might serve to replace or supplement human relationships.

When social media became popular about a decade later, interest in this space exploded. The 2021 novel Klara and the Sun, by Nobel Prize-winning author Kazuo Ishiguro, explores how humans and life-like machines might form meaningful relationships.

And with increasing interest came increasing concern, borne of evidence that belonging (and therefore loneliness) can be impacted by technology use. In some studies, the overuse of technology (gaming, internet, mobile and social media) has been linked to higher social anxiety and loneliness. But other research suggests the effects depend greatly on who is using the technology and how often they use it.

Research has also found some online roleplaying game players seem to experience less loneliness online than in the real world – and that people who feel a sense of belonging on a gaming platform are more likely to continue to use it.

All of this suggests technology use can have a positive impact on loneliness, that it does have the potential to replace human support, and that the more an individual uses it, the more tempting it becomes.

Then again, this evidence is from tools designed with a specific purpose (for instance, a game’s purpose is to entertain) and not tools designed to support human connection (such as AI “therapy” tools).

The rise of robot companions

As researchers in the fields of technology, leadership and psychology, we wanted to investigate how ChatGPT might influence people’s feelings of loneliness and supportedness. Importantly, does it have a net positive benefit for users’ wellbeing and belonging?

To study this, we asked 387 participants about their usage of AI, as well as their general experience of social connection and support. We found that:

  • participants who used AI more tended to feel more supported by their AI compared to people whose support came mainly from close friends
  • the more a participant used AI, the higher their feeling of social support from the AI was
  • the more a participant felt socially supported by AI, the lower their feeling of support was from close friends and family
  • although not true across the board, on average human social support was the largest predictor of lower loneliness.

AI friends are okay, but you still need people

Overall our results indicate that social support can come from either humans or AI – and that working with AI can indeed help people.

But since human social support was the largest predictor of lower loneliness, it seems likely that underlying feelings of loneliness can only be addressed by human connection. In simple terms, entirely replacing in-person friendships with robot friendships could actually lead to greater loneliness.

Having said that, we also found participants who felt socially supported by AI seemed to experience similar effects on their wellbeing as those supported by humans. This is consistent with the previous research into online gaming mentioned above. So while making friends with AI may not combat loneliness, it can still help us feel connected, which is better than nothing.


Read more: AI can already diagnose depression better than a doctor and tell you which treatment is best


The takeaway

Our research suggests social support from AI can be positive, but it doesn’t provide all the benefits of social support from other people – especially when it comes to loneliness.

When used in moderation, a relationship with an AI bot could provide positive functional and emotional benefits. But the key is understanding that although it might make you feel supported, it’s unlikely to help you build enough of a sense of belonging to stop you from feeling lonely.

So make sure to also get out and make real human connections. These provide an innate sense of belonging that (for now) even the most advanced AI can’t match.


  • Michael Cowling is an Associate Professor – Information & Communication Technology (ICT), CQUniversity Australia
  • Joseph Crawford is a Senior Lecturer, Management, University of Tasmania
  • Kelly-Ann Allen is an Associate Professor, School of Educational Psychology and Counselling, Faculty of Education, Monash University
  • This article first appeared in The Conversation

Acknowledgement: the authors would like to acknowledge Bianca Pani for her contributions to the research discussed in this article.

Light Start: Okay ChatGPT, Copilot’s new key, Xbox’s appliance sea, and NASA’s naming plea
https://stuff.co.za/2024/01/05/light-start-ok-chatgpt-copilots-key-xbox/ | Fri, 05 Jan 2024 10:23:06 +0000

“Hey, ChatGPT… why did the chicken cross the road?”

Image: ChatGPT stock

Okay, Google, your time is up. After spending years as Android’s default subordinate, Google Assistant is heading for the door. We’re not revealing any sudden prejudices against the search giant’s efforts, but rather making some room for a newcomer to the party: ChatGPT. OpenAI’s entry might already be available as a standalone app across Android and iOS, but its abilities when it comes to phone functionality are limited.

That’ll be changing soon, if the folks over at Android Authority are correct in thinking that OpenAI is currently working on functionality that’ll let Android users swap Google Assistant out for something a little smarter. The ChatGPT Android app — version 1.2023.352 — saw the addition of a new activity going by the name “com.openai.voice.assistant.AssistantActivity”. It’s disabled by default, but turn it on and it’ll activate the same sort of overlay that turns up when summoning your default assistant from any screen.

Launching the feature won’t net any results just yet; it closes before revealing its true nature. That, plus a new XML file named “assistant_interaction_service” added in the latest version of the app, and a whole host of other too-codey names we won’t list here, all imply a half-baked feature that’s still in the throes of development. That’s fine. We can wait.

We just… won’t be waiting very long. OpenAI isn’t the first to be struck with the idea of turning its AI into an assistant, and it won’t be the last. There’s no official word on when we can expect the feature to land, but we’re guessing it’ll be before the likes of Google and Microsoft’s entries — especially considering the former’s announcement of Assistant with Bard back in October 2023.

Source

Copilot is all keyed up

Image: Microsoft Copilot key

If the unceasing news surrounding Microsoft’s foray into generative AI wasn’t a big enough clue, the company is more than eager to get a move on with artificial intelligence. Not only did Copilot — previously known as Bing Chat — just hit Android and Apple’s respective app stores, but the generative AI is getting its very own landmark on future Windows PCs.

Specifically, it’s getting a custom Copilot key, right alongside the Alt and arrow keys if the promotional video is anything to go by. Rather than allowing users the chance to try out Copilot for themselves, Microsoft is going the U2 route, shoving tech that might’ve otherwise been ignored down our throats from the get-go.

Microsoft announced the change in a blog post yesterday, noting that the key would be introduced later this year and will be “ushering in a significant shift toward a more personal and intelligent computing future where AI will be seamlessly woven into Windows from the system, to the silicon, to the hardware,” before noting that the new addition is one of the most significant changes made to the keyboard in nearly three decades.

Dell’s incoming line-up of XPS laptops will be the first to get the new key — which can be found standing in the shoes of the right-hand-side Ctrl key, lodged in between the Alt and arrow keys. It feels like an all-too-calculated change, one designed to confuse users and garner the company an onslaught of clicks come quarterly earnings.

Enter the Xbox Series S… toaster?

Image: Xbox Series S toaster

Xbox isn’t the most serious of companies in the gaming space. Where other companies like PlayStation prefer to adhere to strict design guidelines (to the PS5’s detriment), Xbox likes to have a little fun. A little over two months ago, it was raffling off a glorious Diablo IV-themed Series X that we’d probably do some less-than-savoury acts to get in our hands. That’s not even mentioning the million or so other themed consoles, controllers, fridges, and now, toasters.

Yes, you read that correctly. Toasters. Xbox — the company that just spent almost $70 billion on a game company — is selling toasters. Ones that look like the company’s Xbox Series S console and will even imprint the iconic logo onto whichever piece of bread gets the honour of sitting in the Series S toaster. There’s just one problem: it’s a Walmart exclusive. Even if you could perform some wizardry to get it across the border, it’s sold out everywhere.

If a toaster and fridge aren’t enough to complete your home’s makeover, the appliance’s release is reportedly part of a collection of Xbox-branded items including noodle bowls, mouse pads, pen holders, and storage boxes. When and where these will be hitting shelves, we can’t say.

Source

NASA’s next mission involves sending your name to the Moon (really)

Image: NASA VIPER boarding pass

If you aren’t already involved in NASA’s inner workings, there’s a good chance you won’t be included in the agency’s VIPER mission — which will send out NASA’s first-ever robotic moon rover. Unless you’re one of the few to get their names aboard the rocket ship, that is. Newbies to NASA might not know that the agency has grown fond of attaching the names of interested youths (and 23-year-old journalists) to its spacecraft before blasting those rockets up to space.

The VIPER (Volatiles Investigating Polar Exploration Rover) mission will be the next to take a list of names with it, as it attempts to scour the Moon’s South Pole in search of water ice. The VIPER robot can measure the location and concentration of water ice, with successful scouting promising to alter how we carry out long-term space missions in the future.

“VIPER represents the first resource mapping mission on another celestial body and will deepen our understanding of how frozen water and other volatiles are distributed on the moon, their cosmic origin, and what has kept them preserved in the lunar soil for billions of years,” NASA said.

As for getting your name on board, the process is as simple as heading to this here webpage and entering your name and a PIN code, which will get you a ‘boarding pass’ ahead of the official launch, currently pencilled in for November 2024. So far, around 13,000 people have signed up for the project.

Source

‘Hallucinate’ was the perfect word for 2023
https://stuff.co.za/2024/01/04/hallucinate-was-the-perfect-word-for-2023/ | Thu, 04 Jan 2024 13:27:25 +0000

Cambridge Dictionary picked “hallucinate” as its word of the year for 2023. It’s not a throwback to the acid-popping sixties, but a nod to the way this new generation of AI chatbots, well, makes shit up.

But, as Naomi Klein wrote in May, “Why call the errors ‘hallucinations’ at all? Why not algorithmic junk? Or glitches?”

I couldn’t agree more. Talk about cultural appropriation. The hippies would sue if they, well, like, were able to get it together after all these years.

Hallucination, says Klein, refers to the “mysterious capacity of the human brain to perceive phenomena that are not present, at least not in conventional, materialist terms”. But the real “warped hallucinations” are being had by the tech CEOs who “unleashed” AI chatbots.

“These folks are just tripping,” Klein writes. “Generative AI will end poverty, they tell us. It will cure all disease. It will solve climate change. It will make our jobs more meaningful and exciting. It will unleash lives of leisure and contemplation, helping us reclaim the humanity we have lost to late capitalist mechanisation. It will end loneliness. It will make our governments rational and responsive.”

Hallucinate is an apt word for last year for other reasons too, certainly in South Africa, where #loadshitting blackouts were the worst they’ve ever been. It’s a word that sums up how so many of us felt about the things we saw last year. Surely they weren’t real, we told ourselves, because they were so surreal.

In the same week of December that the Sheriff of the Court arrived at Luthuli House to collect on a R150 million debt judgement, the broke ANC’s cabinet ministers announced a $200 million (R3.8 billion) deal with the sanctioned Russian bank Gazprombank and then a 2,500MW nuclear deal. Days later, news broke that the ruling party’s MPs would rush through the Electoral Matters Amendment Bill over the Christmas period. It will also have public hearings at the same time as contentious new home affairs legislation, meaning less time for public participation, according to News24.

What does this bill enable? The broke ANC’s president – also the country’s president – will be able to set the limits for donations to political parties and from what amount these must be publicly declared.

I feel like we’ve experienced an economic coup – all to keep the ANC in power and to pay off its R150 million debt before this year’s election. Say it ain’t so.

As Google CEO Sundar Pichai said last year: “No one in the field has yet solved the hallucination problems.”

Indeed, generative AI isn’t always as good as it seems. Apart from hallucinations, the video launch of Google’s new AI engine called Gemini – which “highlights some of our favourite interactions with Gemini” according to the official demo video – was “faked,” TechCrunch concluded.

What seemed like a smooth video was actually many still images, with Google admitting “We made a few edits to the demo (we’ve been upfront and transparent about this)” – something TechCrunch points out the search giant did not admit until Bloomberg noticed.

Gemini underpins Google’s chatbot Bard, which made a mistake when it was launched last February that saw $100 billion wiped off parent company Alphabet’s market value. Bard’s mistake – not knowing when the first photograph of a planet outside our solar system was taken, which was 19 years ago – isn’t a hallucination. It’s just a mistake.
