Our vision for safe AI

At our school, we believe that artificial intelligence can be a powerful educational tool – but only when it is used safely, respectfully and with supervision.
That’s why we have created a clear system of rules, methodologies and technical measures that protect students, teachers and families.

This page explains how we keep children safe when using AI, what we teach, how AI is used in school processes, and how parents, teachers and the wider community can support safe digital habits at home.

Basic principles for safe use of AI in our school

  1. No pupil personal data in AI systems
    We do not enter pupil names, parents’ names, addresses, classes, photographs, family situations or any identifiable information into AI tools.

  2. No photos of children or teachers in generative AI
    AI models treat uploaded photos as data that can be analyzed in detail, which is why we do not upload them to school AI tools at all.

  3. We use AI for educational purposes only
    All AI activities in the school have a methodology, a goal and pedagogical guidance. AI never does the work for the pupils.

  4. We only work in safe, school-based AI environments
    We use vetted tools with zero-data-retention settings: Google Gemini for schools, Microsoft Copilot for schools and MagicSchool. These services do not use pupil data for training and meet the necessary security standards.

  5. Transparency with parents
    Every use of AI in school is clearly communicated to parents, who always know when, how and for what purpose AI is being used.

How we protect pupils' data and privacy

Our school follows the recommendations of:

  • Ministry of Education of the Slovak Republic
  • European Union – AI Act (2024-2026 implementation)
  • OECD Guidelines for AI in Education
  • and other applicable guidance

At school, we follow these rules:

  • We do not store any children’s data in external AI systems – texts, assignments and projects remain in the school’s Google Workspace account.
  • We always use AI without sending personal data – everything we enter into the AI is anonymous and generic.
  • Teachers receive training on AI safety and digital ethics – we run our own in-school teacher training as part of the Digital School Transformation project.
  • We use each AI tool only after approval by the school management – safety is the priority, not the trend.

How we teach students to use AI safely

In computer science lessons, ethics education, project-based teaching and cross-curricular activities, we teach children to:

  • Recognize the risks of the online space: deepfake videos, false information, fake profiles, manipulation, generated content.
  • Never share personal information or photos – not with AI, not with apps, not on social networks.
  • Verify the facts – AI can hallucinate. We don’t tell children that AI “always knows the truth”.
  • Think critically and don’t give in to emotions. Children learn to respond thoughtfully, not impulsively.
  • Use AI as a helper, not as a substitute for thinking. AI explains but does not perform tasks for students.

How parents can help at home

  • Let children use AI in a common area of the home (not behind a closed door).
  • Explain to them that AI is neither a friend nor a diary.
  • Teach children the rule “Think twice, send once”.
  • Remind them that they can always come to you with a question or a problem – without fear.
  • Don’t let kids upload photos to AI apps.
  • Keep track of what apps they have installed.