Guide

Using AI responsibly

Explore issues and considerations around the responsible use of AI with your further education students. You can share our content with your students via your own virtual learning environment (VLE).

Ethical issues around AI

People have mixed feelings about AI. Some are worried about how AI might change their lives, while others are excited about the ways AI could make things better. AI also raises important ethical questions—like what is fair or right—and we don’t have all the answers yet. Here we explore some of the key ethical issues.

Energy usage

AI systems require a lot of resources because they handle large amounts of data, which demands significant processing power. This high level of processing uses a lot of energy, not just to create the AI systems, but also to keep them running and updated as they continue to learn and evolve.

The increased energy use is a concern because it can contribute to climate change. There are also local environmental impacts where the processing takes place, as large amounts of water are needed for cooling.

To address this, developers are working on ways to reduce the energy consumption of AI systems. For example, they are developing smaller models that can do the same tasks but use less power. Even with these efforts, it's important to consider the environmental impact of AI and find ways to minimise its effect on the planet.

As users of these tools, we should be aware of this impact and be mindful of how we use them. For example, we know that large generative AI models in particular have high energy usage, so when we use these models we shouldn’t generate lots of content just because we can.

Unfair collection of data

Training AI models requires a lot of data, and in many cases, this data has been collected from the internet without asking the original creators for permission or paying them for their work. For example, an AI might learn by using articles, images, or even songs that people have created and posted online.

Once the AI model has been trained with this data, it's very difficult, if not impossible, to remove that data from the model. This means the AI keeps using what it learned from that data, even if the creators didn’t agree to it.

Because of this, many original creators want to be compensated for their work. They feel that if their creations are being used to train AI, they should get something in return. Some creators and companies are even suing the big AI developers. For example, stock image supplier Getty Images are suing Stability AI, who make the image generator Stable Diffusion. Getty argue that the Stable Diffusion model has been trained on their copyrighted images. The rules about AI and copyright are still being worked out, so things might change in the future.

Some AI companies are trying to make sure their models don’t reproduce content exactly as it was found online. But for many creators, this isn’t enough. They believe their work should be protected, and they should be paid when it’s used to train AI systems.

Labour practices

When we think about AI, we often focus on the technology, but behind many AI tools are real people doing hard work, sometimes in unfair conditions. To create these tools, companies need lots of data that has to be labelled or organised by humans. This work is often done by low-paid workers who spend hours doing repetitive tasks, like identifying objects in pictures or typing out what’s said in audio clips. These workers usually earn very little and often work in poor conditions without much job security or recognition.

Another issue is how some companies use workers to clean up AI-generated content. For example, if an AI tool creates offensive or inappropriate material, people are hired to remove it before anyone else sees it. This work can be tough because they have to look at upsetting things, and they often don’t get enough support or fair pay.

There is a need for greater transparency and better working conditions for the humans behind AI, ensuring that they are treated fairly and with respect.

Using AI to deceive

Generative AI content is becoming very realistic, which means there’s a concern that it can be used in harmful ways. For example, someone could fake an image and make it look like a real photo, or they could create a "deepfake" video where a person seems to say or do something they never actually did.

One example of how this could be used maliciously is if a scammer uses an AI tool to copy someone’s voice. The scammer could then use the fake voice to call that person’s family members and trick them into sending money to the scammer’s account, thinking they’re helping their loved one.

AI-generated content can also make scams look much more believable. We might see AI-created content on social media platforms like Instagram and TikTok that is designed to get our attention and might trick us into giving our personal information to a scam website or signing up for a service that doesn’t really exist.

The opposite is also a problem: because generative AI exists, people can also claim that real photos or videos are fake. This makes it harder to know what’s true and what’s not.

There are ways to try to identify AI-generated content, but these methods aren’t perfect, and as the technology keeps improving we can’t assume we’ll always be able to tell what’s AI-generated and what’s not.

There are also efforts from developers to label AI-created content made with their tools, but these methods aren’t common yet and we don’t know how well they will work.

For now, we need to be extra careful when we’re looking at content and make sure we evaluate it.

Key Points

  • AI systems use a lot of energy, which can harm the environment and contribute to climate change.
  • AI models often use data from the internet without permission, leading to concerns about creators not being fairly compensated.
  • Many workers behind AI tools face poor working conditions and low pay, raising ethical concerns.
  • AI-generated content can be used to deceive people, making it difficult to tell what’s real and what’s fake.

Downloadable content: Ethical issues around AI (.docx)

Share this content with your students by uploading it to your own virtual learning environment (VLE).

Being responsible users of AI

Use AI as your assistant

It's important to use AI to support you, not take over your work. AI can make us better at what we do, but it doesn't create or approach tasks in the same way humans do. When we work together with AI tools, we can combine their strengths with our own creativity and judgment to achieve the best results. This way, we stay in control and make sure the things we create are meaningful and accurate.

Let’s say you have a big essay to write for your class, and you’re feeling a bit overwhelmed by the topic. Instead of starting the research and writing process yourself, you decide to use an AI tool to handle the entire thing. You type in the essay prompt, and within minutes, the AI generates a complete essay for you. It might seem like a lifesaver, especially if you’re short on time or struggling with the topic.

However, because the AI tool is doing all the work, you haven’t spent any time understanding the material or forming your own ideas. The essay might be well-written, but it lacks your personal voice and insight. You also don’t know if the AI has used accurate information or if it has included details that your teacher would expect you to discuss.

When you submit the AI-generated essay, you might get a decent grade, or you might have submitted something worth no marks at all. Either way, you haven’t actually learned anything. Worse, if your teacher asks you to explain your thoughts or expand on certain points, you could struggle to respond because you didn’t engage with the content yourself. Over time, relying too much on AI tools in this way can make it harder for you to develop your own writing and critical thinking skills, which are essential for success in college and beyond.

Using AI as a tool to help with ideas, research, or proofreading can be really helpful but make sure you’re the one actively learning and shaping your work.

Look for bias

You may already know how bias can enter a model: through the training data, development decisions and user feedback.

As users of AI systems and tools we need to check the outputs of the tools we use for bias:

  • Writing an Essay: If you use AI to help you write an essay or come up with ideas, make sure the information it gives you is fair to all sides. For example, if the AI suggests a point of view, check that it isn’t unfairly favouring one group of people or one side of history.
  • Researching for a Project: When you use AI to find information for a project, look at whether the AI is showing you different viewpoints. If it only gives you information from one perspective, you should look for other sources to make sure you’re getting a full picture.
  • Creating Visual Presentations: If you use AI to help design slides or choose images for a presentation, check to make sure the images are fair and include different types of people. For example, if the AI suggests pictures of leaders, make sure it’s not just showing one gender or one type of person.

Protect your (and others') data privacy

AI tools often learn and improve by using the information that people put into them. This means that when you type something into an AI tool, like a chatbot, what you write could be saved and used to help train the AI to get better at answering questions in the future.

However, it's important to remember that once your data is used to train the AI, that information becomes a part of how the AI works. This makes it very hard, or even impossible, to remove specific pieces of data from the AI after it’s been trained.

That’s why you need to be careful about what kind of information you put into any AI system.

Be careful about sharing your own personal information

Avoid putting things like your full name, address, phone number, or other private details into AI tools. Once the AI model learns from it, you probably won’t be able to get that information back or delete it.

Be extra careful about other people’s personal information

If you are thinking of putting someone else’s personal details into an AI tool, you should always ask them first and gain consent. It’s important to respect their privacy and make sure they’re okay with their information being used.

By thinking carefully about what you share, you can help keep your own and others' information safe when using AI tools.

How do you know if your inputs will be used to train an AI model?

You can check the terms and conditions of an AI tool to see if they use what you type or upload to train their models.

Sometimes, you might be able to choose whether you want your inputs used this way. Look in the terms and conditions or the settings of your account to find out.

Some services let users with paid accounts opt out of AI training as a special benefit. This means that if you use the tool for free, you might be "paying" with your data instead of money. Think about what you're okay with before deciding and still be careful about inputting any personal data.

Fact check

Generative AI tools can sometimes output information that sounds correct but is actually wrong. This mistake is called a “hallucination.” These tools can also get confused and give the wrong answers, especially if we don’t give them enough details or clear instructions.

As someone using these tools, it’s important to check if the AI’s answers are correct. This might mean doing some extra research to make sure the information is right, or asking a teacher or someone who knows a lot about the topic to help you verify the answers.

Acknowledging use of AI

As AI technology becomes more and more advanced it’s getting harder to tell when AI has been used. For example, think of how realistic AI-generated images have become in just the last two years.

[Image: comparison of AI-generated pictures, 2022 vs 2024]

To be responsible users of AI, it’s important to be honest about when and how you’ve used AI tools in your work. For example, if you used AI to help you write a report or create a piece of art, you should let people know.

Your school or college might have specific rules about how to acknowledge the use of AI in your work, so make sure to follow those guidelines.

Key Points

  • Consider AI tools as helpful assistants for your work and not as a way to avoid doing the work yourself. This is also the more effective way to use these tools: they are great helpers but poor replacements for human creativity and intelligence.
  • Being responsible means checking an AI tool's outputs, whether these are recommendations made or content generated.
  • Remember to check for potential biases and for accuracy, and actively include a wide range of perspectives in your work.
  • Avoid putting any personal information into AI tools, and respect others' privacy by asking for consent before putting anyone else's information in too.

Downloadable content: Being responsible users of AI (.docx)

Share this content with your students by uploading it to your own virtual learning environment (VLE).

Activities and questions

Discussion questions:

  • Do you think AI can help you be more creative, or do you think it might stop you from coming up with your own ideas?
  • How can you use AI to help with your creativity without losing your own voice?
  • Why might you not want an AI tool to be trained on the data you give it?
  • Are there situations where you would want an AI tool to be trained on the data that you give it?
  • What does it mean when we say AI can be "biased," and how can you tell if something the AI gives you is unfair?
  • Why is it important to let others know when you’ve used AI?

Multiple choice questions:

  • What should you do if you think an AI tool has provided a biased viewpoint while helping you research for an essay?

A) Accept the AI's output because it’s based on data.

B) Check the information to ensure it is fair and not favouring one side unfairly.

C) Ignore the bias and use the information in your essay.

D) Assume the AI cannot be biased because it is a machine.

Answer: B) Check the information to ensure it is fair and not favouring one side unfairly.

Feedback: Correct! It’s important to review AI-generated content for bias to ensure that the information is balanced and fair, reflecting a full and accurate perspective.

  • What is a responsible way to handle personal information when using AI tools?

A) Always provide as much personal information as possible to improve AI accuracy.

B) Share personal details only with AI tools that guarantee data privacy.

C) Avoid sharing your own or others' private details in AI tools to protect privacy.

D) Trust that AI tools will automatically protect all personal information.

Answer: C) Avoid sharing your own or others' private details in AI tools to protect privacy.

Feedback: Correct! Being careful about what personal information you share with AI tools helps protect your privacy and the privacy of others, as once data is used to train AI, it may be difficult or impossible to remove.

  • What is an important step to take when using AI-generated information?

A) Trust the AI completely and use the information without checking it.

B) Fact-check the information to make sure it’s correct.

C) Assume that AI-generated information is always more accurate than human-generated content.

D) Only use AI-generated information if it supports your existing opinions.

Answer: B) Fact-check the information to make sure it’s correct.

Feedback: Correct! AI tools can sometimes produce incorrect or misleading information, so it’s important to verify the accuracy of AI-generated content through additional research or by consulting experts.

  • How can you find out if your inputs are used to train the AI model behind a tool?

A) By checking the tool's terms and conditions or privacy policy.

B) By asking the AI directly during a chat session.

C) By not checking, and assuming that all AI tools use inputs for training without exception.

D) By looking at the AI tool's logo or branding.

Answer: A) By checking the tool's terms and conditions or privacy policy.

Feedback: Correct! The terms and conditions or privacy policy of the AI tool will usually explain how your inputs might be used, including whether they are used to train the AI model. It’s important to read these details to understand how your data is being handled.

  • Why might you not want an AI tool to be trained on the data you give it?

A) Because the AI might accidentally share your data with other users.

B) Because you might give it bad-quality data that will confuse it.

C) Because once the AI is trained on your data, it might be impossible to remove it.

D) Because the AI might forget the information after a while.

Answer: C) Because once the AI is trained on your data, it might be impossible to remove it.

Feedback: Correct! If you let an AI tool use your data to train itself, that data becomes part of how the AI works, and it may be very difficult or impossible to take it back or delete it later. This is why you should think carefully about what information you share.

Ethical issues around AI

Activities

AI Ethics Debate: Divide the class into two groups and assign each group a stance on the ethical implications of generative AI content. One group will argue in favour of stricter regulations and limitations on AI content generation, while the other group will argue for more freedom and less regulation.

Students will research and prepare their arguments, considering factors such as privacy concerns, potential misuse, and the impact on society.

They will then engage in a class debate, presenting their viewpoints and countering the arguments of the opposing group. This activity will encourage students to explore different perspectives and develop their communication and persuasion skills.

Discussion questions:

  • Do you think it’s okay for AI companies to use things like pictures, songs, and stories from the internet without asking the people who made them? Why or why not?
  • How would you feel if something you created was used to train an AI model without you knowing?
  • What could be some fair ways to compensate creators whose work is used by AI companies?
  • How can we make sure the people who help build AI are treated fairly?
  • How do you think the increase in use of AI tools could affect the environment?
  • What are some ways we can reduce the effects of AI on the environment?
  • AI can make pictures, videos, and even voices that look and sound real but aren’t. How can we make sure we don’t get tricked by something that was made by AI?
  • If you saw a label that said something was made by AI, would it change how you feel about it? Why?

Multiple choice questions:

  • Why is the high energy use of AI systems an environmental concern?

A) Because it makes AI systems run slower.

B) Because it can contribute to climate change and use a lot of natural resources like water.

C) Because it means they take longer to build.

D) Because it causes computers to overheat.

Answer: B) Because it can contribute to climate change and use a lot of natural resources like water.

Feedback: Correct! AI systems require a lot of energy, which can contribute to climate change and other environmental impacts, like using large amounts of water for cooling.

  • What are developers doing to reduce the environmental impact of AI systems?

A) They are shutting down all AI systems to save energy.

B) They are developing smaller models that can perform the same tasks but use less power.

C) They are asking users to only use AI systems at night when energy is cheaper.

D) They are switching to solar-powered computers to run AI systems.

Answer: B) They are developing smaller models that can perform the same tasks but use less power.

Feedback: Correct! Developers are working on creating smaller, more efficient AI models that use less energy, helping to reduce the environmental impact of AI systems.

  • Why is the way data is collected to train AI models sometimes considered unfair?

A) Because the AI models forget the data after using it once.

B) Because AI models don’t actually need any data to function.

C) Because the data takes up too much storage space.

D) Because the data is often collected without asking the original creators for permission or paying them.

Answer: D) Because the data is often collected without asking the original creators for permission or paying them.

Feedback: Correct! Many AI models are trained using data taken from the internet without the creators' permission or compensation, which is why some creators feel their work is being used unfairly.

  • What is a "deepfake"?

A) A video or image that shows a real event happening.

B) A type of AI-generated content that makes it look like someone is saying or doing something they never actually did.

C) A harmless AI-generated cartoon.

D) A special kind of photograph that is created by professional photographers.

Answer: B) A type of AI-generated content that makes it look like someone is saying or doing something they never actually did.

Feedback: Correct! A deepfake is an AI-generated video or image that can make it look like someone did or said something they never actually did, which can be used to deceive people.

  • What are developers of AI systems doing to help people identify AI-generated content?

A) They are creating AI systems that remove all fake content from the internet.

B) They are banning the use of AI for creating content.

C) They are making AI-generated content less realistic.

D) They are working on ways to label content created by AI, though this isn’t common yet.

Answer: D) They are working on ways to label content created by AI, though this isn’t common yet.

Feedback: Correct! Some developers are trying to create labels for AI-generated content so that people can tell when something was made by AI, but this practice isn’t widespread yet.

  • What problem arises because of how realistic AI-generated content can be?

A) It makes all online content less enjoyable.

B) It makes it harder to distinguish between what is real and what is fake.

C) It helps people understand complex topics more easily.

D) It causes people to stop using the internet altogether.

Answer: B) It makes it harder to distinguish between what is real and what is fake.

Feedback: Correct! The realism of AI-generated content can make it difficult to tell the difference between what is real and what is fake, which is why it’s important to be careful when evaluating online content.

Downloadable content: Responsible use of AI - activities and questions (.docx)

Share this content with your students by uploading it to your own virtual learning environment (VLE).

Responsible AI Checklist

1. Use AI as your assistant, not a replacement
  • You can use AI tools to support your work, but don’t let them take over completely or rely on them too much.
2. Look for bias
  • Check AI outputs for fairness and avoid one-sided perspectives.
  • Ensure information and visuals include diverse viewpoints and representations.
3. Protect your and others’ data privacy
  • Review the AI tool’s terms and conditions to understand if your inputs will be used to train the model.
  • Avoid sharing your personal information with AI tools and always get consent before inputting someone else’s personal details.
4. Be upfront about your use of AI
  • Be honest about when and how you've used AI tools in your work, following any specific school or college guidelines.
5. Fact-check AI’s outputs
  • Always check the information provided by AI with reliable sources or experts.

Downloadable content: Responsible AI checklist (.docx)

Share this content with your students by uploading it to your own virtual learning environment (VLE).

This guide is made available under Creative Commons License (CC BY-NC-ND).