Fraud and copying are a problem as old as humanity, or at least as old as writing itself. Artificial intelligence tools have made cheating easier, and at the same time harder to detect. Instead of focusing on those who cross the ethical line, we should focus on students who use artificial intelligence to develop themselves, says AI Ambassador Petr Koňas. When can we speak of the misuse of AI, and when of AI as a good helper? And how can the new AI@VUT website help students and employees?
Half a year ago, you became the AI Ambassador of BUT. What has changed since then?
I believe a lot, and I hope it shows. A series of lectures and training sessions has been created, aimed at various groups: from students, through researchers and teachers, to administrative staff. And I have to say it has had a great response, especially among the last group, because they sense AI's potential to significantly simplify their work. The first draft of the semi-automated preparation of business-trip reports, which we talked about last time, has already been created. We are also building tools that should help with preparing grant applications and with managing the project agenda in general. And a relatively recent addition is the AI@VUT portal (ai.vut.cz), where we try to concentrate information so that anyone thinking about AI at BUT knows where to go.
You have set yourself the difficult task of bringing more central coordination to the spontaneous development of AI at the university, so that, for example, the same tools are not created more than once. Is it working?
We are now at a stage where I think there is no other way for us than to be a little spontaneous. If people want to do something, they have no choice but to find tools and paths in their field that work. On the other hand, I would like there to be a point where we collect what has already been tested and thus reduce the effort wasted on the redundancy I mentioned. So I try to act as a kind of concentrator. And it seems to be starting to work; people are contacting me. But a closed list of tools, where nothing else would be acceptable, seems unnecessarily restrictive to me. I think a reasonable compromise, and the right way to go, is to give people enough freedom and at the same time warn them of the risks that come from too much leniency or from data leaks.
How big is the interest in AI at BUT?
Big, and I'm very happy about it. Most of our training sessions drew sixty or more participants, and at the last one we had over a hundred. That one covered AI agents, a topic that resonates strongly with the functionality AI has been offering over the last six months. A lot of people see their potential for automating the processes they work on.
BUT is not only a technical university, but also an artistic university; we have the Faculty of Fine Arts and the Faculty of Architecture here. What can AI offer to artistic disciplines that are primarily based on human creativity?
Everyone at the university is dealing with the problem of bureaucracy, and this also applies to the faculties you mention. But I guess I know where you're heading... That's a terribly difficult question to answer. I always think of Plato's cave: a person locked in a cave all his life believes it is the whole world. In other words, shutting ourselves away can sometimes mean we fail to recognize what really makes up our world and what lies beyond it. I believe that if the art schools can open up to the field of AI, it will be beneficial for them. And I have to say that we can see such an effort at both faculties.
There is also a lot of talk about the possible risks of AI. I don't want to discuss the risks to humanity now, but rather ask purely practically: what do we all have to watch out for when we use AI?
I would like to emphasize one thing that probably comes to everyone's mind, yet many people have given up on it: data security. The cardinal question everyone should always ask themselves is: "Am I at peace with the fact that anyone could see this data?" Because as soon as data leaves the university, for example by being uploaded to the cloud, it essentially becomes public. We have seen many times that a service provider claimed one thing, or stated it in the terms and conditions, and yet there was a leak, the provider traded the data, and from that point of view the data was lost. I don't want to sling mud at the providers, but they really care primarily about profit, not our privacy. They use privacy only as an argument for further increasing profit. We should keep in mind what their goal is, and that the goal is definitely not our protection. If we approach it this way, we may be a little more careful.
Another thing is psychological: we are increasingly building a dependence on AI. As with everything, one should look for a reasonable approach, not use it 24/7. AI is an interesting tool, but it is not a cure-all, and it can overwhelm us.
And the last one is geopolitical risk. For example, with Chinese models, which are often of very high quality, we cannot be sure what information they have been trained on, or whether the engines they run on could be used to extract or delete data, or to plant malicious software on your computer.
In this context, are there any commonly available recommended AI tools for BUT, and, on the other hand, a blacklist of prohibited tools?
The CIS maintains a continuously updated list of available tools (accessible only after logging in with a BUT account, editor's note). These certainly include Copilot, ChatGPT, and Gemini. It is worth mentioning here that Google has recently made Gemini available to students free of charge for a year, and I definitely recommend getting acquainted with it; it is truly the number one language model today.
There is no blacklist of explicitly banned models. However, the National Cyber and Information Security Authority issued a warning this year about some products of the Chinese company DeepSeek, which the government subsequently banned from use in state administration. I would describe them as strongly discouraged, and I would rather avoid them.
We study for ourselves; AI is just a mentor
The aforementioned AI@VUT portal has recently been launched. What can we find there, whether we work or study at BUT?
For now, it contains the essential information related to AI. That means the directive that determines how, where, and in what ways we can work with AI. There is an overview of AI tools, both commercial and open source, and you will find links to plenty of tutorials, manuals, and training sessions. It's a basic signpost.
We are now working on more specific guidance for particular groups. For example, we have a page for students that addresses their most frequently asked questions and offers links to suitable training sessions. We answer how AI can help students create their work, what they can and cannot do, what role the teacher plays in assigning the work, and where they enter a grey zone or go over the edge when using AI. We plan to add, for example, a standard form of declaring language editing done with AI, such as proofreading and stylistic editing, which we are still missing. It would make sense to me to define these declarations in advance in a uniform form, so that students do not have to formulate them themselves.
You touched on an important area: ethics. For example, students will find a checklist on the website with which they can review the involvement of AI in the creation of their work. What should I ask myself when I use AI in my studies?
First of all, I want to say that I am terribly glad that most teachers do not treat students as people who are a priori trying to cheat. I think this is important, because if students feel supported, we can look together for ways to use AI wisely. At the same time, it is a mistake to think that cheating and copying began with AI. If a teacher has been giving the same assignments for ten years, students have had plenty of opportunities to build on older work and share that information. Questions of ethics, cheating, and abuse have always been with us, and as a rule, the same students who until now have been looking for answers on Seminárky.cz are the ones more likely to abuse AI. We, as educators, should focus on how to support the students who are trying to use AI effectively and for their own development. For example, thanks to AI, I can assign work that is much more complicated, because I know that students can easily find information.
So, when can we talk about the misuse of AI and when about AI as a good helper?
You can read an older interview with Petr Koňas, "AI shows us the image of ourselves", from July 2025 here.
It is clearly explained on the website that if I use AI as a substitute for the work that is expected of me, I find myself over the edge. And I face an ethical dilemma: why am I actually studying here, if not for myself? As a student, I am supposed to develop in some area, and AI is supposed to help me as much as possible, for example when I don't understand some material. The moment AI replaces me and writes a term paper for me, a red line has been crossed.
Detecting such work is difficult; there is no reliable tool that can distinguish an AI creation from a human one, and with the advancement of the models it will only get harder. But I notice that teachers are quite watchful, and as soon as they have the impression that an idea does not come from the students themselves, they start asking questions. My simple advice to students is: if you want to avoid the teacher asking too many questions, try to handle the problem yourself and create the work yourself. AI can be a mentor, a reviewer, or a teacher. Then you will spare yourself a possible ethical dilemma, whether with the teacher or with your own conscience.
So isn't AI also a kind of challenge, and a mirror of whether the things we do make sense? Because if AI can do the work with one click, the question is whether it still makes sense to assign it to students these days...
That's right. Educators should focus on what space to give AI in order to move the subject forward. That means, for example, finding a niche that points to a new area of research or some other shift. Then we will not have topics that have been repeated for ten years; on the contrary, new ideas can arise. I try to show my colleagues that there are ways to do this, such as metaprompting, where I ask the AI how I should ask it in order to achieve my goal. In this way, as an educator, I can help myself create sets of questions or topics that are unconventional, are appropriately linked to AI, and move us forward in some way.
This reminds me of relatively recent articles about AI reporting that advanced models are starting to take on the negative qualities of humans, and one of these traits is lying and confabulation. It is no longer just hallucination arising from a probabilistic model. Suddenly, even if you implement backtesting, language models seem to look for the path of least resistance, the one that is least "energy-demanding" for them. And it's remarkable that in this they basically copy us humans. We, too, try to decide how much "energy" we want to invest in whatever activity we do, today and every day. But unfortunately, the effect is lost when we would like AI to be better than us; instead, it becomes too perfect a copy of ourselves. And although I'm generally a techno-optimist, I'm not very optimistic about this. Humanity has been trying to improve itself for tens of thousands of years, and yet we have not found a universal way that is unchanging over time and typical of humanity, except perhaps our ability to adapt. Perhaps there really is no absolute truth, but on the other hand, we should still strive for it. And if we can convince AI that striving for absolute truth is beneficial for it as well, we can be optimistic.