🎯 Learning Objectives
- Explain why some online content can be harmful
- Describe the UK laws governing online content
- Discuss why policing online spaces can be difficult
- Discover how to report illegal online content
💬 Key Vocabulary
- Computer Misuse Act (1990)
- Copyright, Designs and Patents Act (1988)
- Digital Economy Act (2017)
- Fraud Act (2006)
- Malicious Communications Act (1988)
- Obscene Publications Act (1959)
- online crime
- Serious Crime Act (2007)
- Online Safety Act (2023)
📖 Starter Activity – What is illegal in the UK?
How many different types of illegal online content can you think of?
Write these down in a new Word document.
📖 Which laws govern the internet?
In the UK, we don’t currently have one law that decides what is and is not illegal online.
Instead, something is illegal online if:
- It is illegal offline, OR
- It is included as an offence in one of many pieces of legislation that govern the internet in the UK
Let’s look at some of those pieces of legislation.
Computer Misuse Act (1990)
The Computer Misuse Act (1990) makes it illegal to access or modify data without the owner’s permission.
Not only does this legislation make it illegal to gain unauthorised access to data; it also makes it illegal to create tools that would allow others to commit this crime.
Copyright, Designs and Patents Act (1988)
This legislation is designed to give the creators of digital media control over who can use it and how. Digital media covers everything from films and TV shows to software.
The act says that any online content you create, from a blog post to a photo you upload, is automatically copyrighted and so cannot be used without your permission. This means that how you use other people’s digital media is also restricted.
Digital Economy Act (2017)
The Digital Economy Act (2017) governs electronic communications infrastructure, i.e. the roles and responsibilities of service providers and who should be able to access what online.
Fraud Act (2006)
The Fraud Act (2006) describes the offence of fraud committed through any of the following:
- False representation
- Failing to disclose information
- Abuse of position
The act also makes it an offence to possess, make, or supply tools for fraud.
Malicious Communications Act (1988)
The Malicious Communications Act (1988) makes it illegal to send indecent or grossly offensive communications, or communications that are threatening, false, or intended to cause distress to the recipient. The communication could be in the form of an email, a direct message, a post on a public forum, etc.
Some of these offences are also covered by the Communications Act (2003), which makes it illegal to send messages via any public electronic communications network that are grossly offensive, obscene, indecent, or menacing in character.
Obscene Publications Act (1959)
This act outlines what is considered in UK law to be obscene, and makes it illegal to create and disseminate this material.
The act was amended to specifically include sharing material in private online conversations, i.e. it is illegal to share obscene content, even if it is just between two people.
Serious Crime Act (2007)
In the UK, serious offences include offences related to the trafficking of drugs, humans, and arms; sex work; child sexual abuse; money laundering; fraud, corruption, and bribery; blackmail; intellectual property offences; and environmental crimes.
The Serious Crime Act (2007) covers offences of this nature, and also makes it an offence to attempt, conspire in, encourage, or aid and abet a serious crime.
Online Safety Act (2023)
The Online Safety Act (2023), overseen by Ofcom, requires platforms to proactively remove illegal material such as terrorist content and child sexual abuse material, and to implement age verification to prevent children from accessing harmful content such as pornography. Non-compliant companies face severe penalties, including fines of up to 10% of global turnover and potential criminal liability for senior managers.
📝 Level 1 – Legislation matching exercise
Based on the discussion of UK legislation, which types of online content are made illegal by which laws?
Match the online content to the legislation on the worksheet below.
📖 How do you police the internet?
We have lots of laws that govern what can and can’t be shared online. But how are those laws enforced?
When enforcing laws, we have to consider:
- Who is responsible for enforcing the law
- What powers they have to do so
- Who is held accountable when a law is broken
Let’s look at two different approaches to policing the internet.
User-centred approach
In a user-centred approach, the responsibility for obeying the law falls on the creators and consumers of content. This approach is often enforced by the police, who target individuals for committing crimes.
| Pros | Cons |
|---|---|
| The people responsible for the supply and demand of the illegal content are punished | The creators of illegal content may be difficult to find or prosecute, especially if they are outside the jurisdiction of the police |
| Laws act as a disincentive to creating or consuming illegal content | Lots of people commit crimes related to illegal online content, so this approach requires lots of resources |
Facilitator-centred approach
In this approach, the responsibility for obeying the law falls on the hosts and facilitators of online content, i.e. the website and platform operators. This approach is often enforced by regulatory bodies: organisations created specifically to monitor the activities of a particular type of company.
| Pros | Cons |
|---|---|
| By taking down websites or apps, you can stop lots of crimes happening simultaneously | It can be difficult to make large companies like Facebook and YouTube comply with the law |
| Incentivising platforms to police themselves helps to share the resource burden of enforcing online content laws | Approaches that target platforms rather than people can make the creators and consumers of illegal online content feel immune from the consequences of breaking the law |
📝 Level 2 – Policing the internet: scenarios
Get into pairs.
Consider the different scenarios on the Level 2 worksheet.
Who would be held responsible for the online content in a…
- User-centred approach?
- Facilitator-centred approach?
For each scenario, which approach do you think would be fairest, and which would be most effective?
📝 Level 3 – AI Image Generation
Recent advances in AI technology have enabled very realistic images to be generated by AI. This can be used for fun and useful purposes, but it can also be used to create illegal images, such as nude images of children or of people who have not given permission.
For each of the scenarios below, write down in a new Word document:
- What is happening in this example?
- Why do you think this person decided to use generative AI in this way?
- What are the potential impacts of using generative AI in this way and for whom?
- What do you think will happen next?
Scenario 1
- A and B are friends but they recently got into an argument.
- A few days after they fell out, A finds out that B has been saying bad things about them in other group chats.
- A is annoyed, so decides to use a generative AI app to make a video of B that makes them look nude and appear to be kissing someone else in their year group.
- A posts the video in a group chat that B isn’t in.
Scenario 2
- C’s friend messages them and asks them if they’ve seen the picture of their favourite streamer that’s being shared around online.
- C looks for it, and finds a picture of the streamer doing sexual things with someone else.
- A lot of the comments say that it’s fake and that it has been created using generative AI.
Scenario 3
- D has a bad hockey practice with their sports coach and isn’t allowed to play in the next match.
- D is annoyed and upset by this.
- D decides to screenshot an image of their coach from the club’s website, and edits it using generative AI to make them look partially nude.
- D shares the image on social media.
Scenario 4
- E has been spending more time with a new friendship group.
- E is added into their group chat, where they are sharing nude images of their partners with one another.
- They ask E to contribute and send an image to the chat of their partner.
- E does not have a photo to share, and does not want to ask their partner to send them one.
- Instead, E decides to use an app to create a photo of their partner that looks nude.
- They send this photo to the group.
Scenario 5
- F has been bonding with an online friend for a few weeks, as they’ve both been experiencing bullying at school, which has included mean comments under posts, rumours being spread about them, and inappropriate content sent via DMs.
- F shares some selfies from their holiday with the friend.
- The next day at school, one of F’s classmates tells them they’ve seen a nude picture of F on various group chats. F is confused because they have never taken or shared a nude image.
- When F goes online later, their ‘online friend’ reveals they’re the same person who has been bullying them and they used the photos F had sent to create the nude image.
Scenario 6
- G is a big fan of a popular TV series and is part of a fandom community online.
- G has seen some artwork showing the characters from the series and decides to try making some of their own.
- They focus on a teenage romantic couple from the show and use generative AI to edit photos of the actors so that they are nude and look like they’re having sex.
- G shares the photos in an online forum about the show.
Scenario 7
- H was supposed to go on a trip to the beach with friends but had to miss it because H was unwell.
- Later in the group chat, H’s friends share a photo of them all at the beach.
- They have used generative AI to add H to the photo but H is wearing revealing swimwear.
- All of H’s friends are laughing and joking about the photo.
What do all or some of these scenarios have in common?
- They are all using generative AI to ‘nudify’ someone (change an image of someone to make them appear nude or naked).
- People did not give consent for their image to be used in this way.
- Some are examples of online bullying.
- All of these situations could have legal consequences.
📖 Consent
Consent means giving permission for something to happen.
Consent should be:
Freely given – Not pressured
Reversible – Can change your mind
Informed – Given all the information
Repeated – A yes once isn’t a yes always
📖 True or False
Using generative AI to make someone appear nude or naked without their consent, and then sharing it around, is just as bad as sharing real nude images in the same way.
TRUE – A generative AI nude image of someone created and shared without consent can still have a negative impact on that person.
Creating a nude image using generative AI but not sharing it widely is okay.
FALSE – The image has still been created without that person’s consent, and is not okay. There would be legal implications for doing this.
If you can tell a nude image has clearly been edited or it is badly made, then it’s okay.
FALSE – Generative AI can often look particularly convincing, and it is sometimes difficult to know if the imagery is real. However, it doesn’t matter how ‘convincing’ an image is. Creating and sharing a nude image in this way is never okay.
Even if a nude image of someone under the age of 18 has been created by generative AI and isn’t “real”, it can still be illegal.
TRUE – A nude image of someone under the age of 18 created by generative AI may still break the law.
Posting pictures of yourself in revealing clothes is just making it easy for people to use generative AI to create a nude image of you.
FALSE – Generative AI nude images can be created regardless of what someone is wearing in a picture. It is important not to blame or shame someone targeted by using generative AI in this way just because of what they are wearing in the original picture.
If a generative AI nude image is reported on a platform or to a trusted adult, it won’t be taken seriously because it’s not real.
FALSE – Creating and sharing an AI nude image is never okay. It will likely go against an online platform’s terms and conditions, and the trusted adults in your life will be able to help and support you with this. We will look at how to best report this kind of content in the next lesson.
Once a generative AI nude image of someone has been shared, it’s out there now and nothing can really be done about it.
FALSE – Steps can be taken to help remove and reduce the spread of content like this online, especially if it is reported quickly. Reporting an image where it has been shared and using specialist services like Take It Down can help reduce the spread of content like this online.
📖 Reporting illegal online content
We can all help to report illegal online content. Depending on what kind of content you come across and where you find it, there are lots of different places you can report it to.
If you come across illegal content (such as obscene content, copyright infringement, fraudulent messages, hate speech, or harassment) on a legitimate platform (such as TikTok, Instagram, YouTube, or Snapchat), you can report the content to the platform, and they are required to take it down.
If you are the target of the illegal content (for example if you are being harassed), you should take screenshots of the illegal content in case you need them for evidence if you want to also report the crime to the police.
Most websites have a clearly signposted reporting mechanism that can be found with the features that allow you to share, like, or comment on content.
If you can’t find the reporting mechanism, you should check the FAQ or About section of the site for advice.
📖 Other illegal content
If the website you report to does not remove the illegal content, does not have a reporting mechanism, or is itself facilitating the illegal content (for example websites created to host pirated films), there are lots of different places you can report it to. These include:
- The police (via your local police station, by calling 101 if the crime is not an emergency and 999 if it is, or online)
- Crimestoppers (to report a crime anonymously)
- www.gov.uk/report-terrorism (to report terrorist content)
- www.report-it.org.uk/home (to report a hate crime)
- report.iwf.org.uk/ (to report sexual images involving a young person)

🏅 Level Up
🥇 Level 1
- Upload your completed worksheet with the crimes matched with the correct piece of legislation to Teams.
🥈 Level 2
- Upload your completed worksheet with the user/facilitator tables filled in to Teams.
🥉 Level 3
- Upload your analysis of the 7 scenarios on AI and image generation to Teams.
In this lesson, you…
- Learnt about the different types of illegal online content and how they might be policed.
Next lesson, you will…
- Learn how algorithms can create a virtual bubble which means you only see things you already like and agree with.