The use of AI solutions in the work of a lawyer – an opportunity or a threat?
A practical workshop on the responsible and effective use of generative AI (LLMs) in legal practice — from analysis quality and common model errors to confidentiality and professional ethics compliance.
Generative artificial intelligence — particularly large language models (LLMs) — is increasingly used in legal practice, including legal research, contract analysis, and drafting documents. At the same time, these tools continue to raise questions about their reliability, limitations, typical failure modes, and compliance with professional ethics, legal privilege, and data protection requirements.
During ICAIL 2026 (International Conference on Artificial Intelligence and Law), the Legal & Data Protection team at Sano invites participants to a hands-on tutorial dedicated to the use of AI and LLMs in legal work.
The goal of the tutorial is to demonstrate how AI systems actually behave in legal tasks — what kinds of problems they can solve, where they generate risks, and how they should be supervised in a professional legal environment. The tutorial will also provide an opportunity to discuss responsible use of AI solutions within an organisation.
Format
- Hybrid format: on‑site, with an option to participate online (please indicate your preferred mode of attendance)
- Duration: half‑day (4 hours)
- Number of participants: max. 30
Why now?
Generative AI — especially large language models (LLMs) — is already being used in legal practice and within the justice system, often without coherent guidelines or safeguards. Examples from around the world show numerous AI deployments proceeding without formal rules for working with chatbots, which increases the risk in high‑stakes legal tasks.
Our workshop addresses this gap by shifting the focus from “theoretical possibilities” to the actual behaviour of AI systems in legal tasks.
Why should you join?
Participation in the tutorial will allow you to:
- gain a better understanding of the limitations and risks associated with the use of AI in legal work,
- acquire practical skills in critically assessing the outputs generated by language models,
- organise and systematise your knowledge on responsible and secure AI use in legal environments,
- view the development of legal tech from a practice-oriented perspective rather than a purely technological one,
- exchange experiences related to the practical use of AI in legal practice,
- engage in a discussion on the need for regulation and responsible AI implementations in compliance and legal operations,
- expand your understanding of the benefits and risks of using AI in the legal profession, with particular emphasis on data security and professional secrecy.
Objectives and Expected Outcomes
Educational outcomes:
- You will understand how prompt precision affects the quality of legal analysis and model outputs.
- You will practise identifying, assessing, and correcting AI-generated errors.
- You will develop AI-supported research and legal-opinion drafting skills, including critical verification of results.
Professional and societal outcomes:
- You will learn the principles of responsible AI implementation in line with professional ethics, confidentiality, and data protection requirements.
- Together, we contribute to creating “soft law” — shared principles for the use of AI, fostering trust in AI supported services.
Who is this workshop for?
- lawyers and legal advisors interested in the responsible use of AI in professional practice,
- law students and trainee attorneys who want to better understand legal tech tools,
- researchers and PhD candidates in the field of AI & Law seeking practical perspectives,
- representatives of public institutions, regulators, and professional bodies.
No advanced technical background is required. The tutorial is suitable for participants at beginner and intermediate levels, while also offering in-depth content for those interested in model evaluation, error analysis, and AI governance in the legal domain.

Anna Kajda-Twardowska is an attorney-at-law with many years of experience providing legal services to entrepreneurs across industries, including IT and new technologies, in all aspects of their operations and in litigation. She is also a certified Approved Compliance Officer, a certified Lead Auditor of the ISO/IEC 42001 standard for AI management systems, and holds a postgraduate degree in New Technology Law. She specialises in contract, commercial, and business law, with a strong emphasis on intellectual property (IP) protection, advising on matters including IP protection, MedTech, and areas of emerging technology related to AI. She is currently Legal Counsel at Sano – Centre for Computational Personalised Medicine – International Research Foundation.
E-mail: a.kajda@sanoscience.org

Wioletta Niwińska is a qualified Polish attorney, well-versed in advising clients on legal issues in all aspects of business operations, including corporate law, contract law, commercial law, business law, and IP protection, particularly in the new technologies industry, including AI, machine learning, and MedTech. She has gained experience in international and national law firms, the public sector (at the National Centre for Research and Development), and the private sector. She is a certified Lead Auditor of the ISO/IEC 42001 standard for AI management systems and holds postgraduate degrees in New Technology Law and in medical law and bioethics. She is Legal Counsel at Sano – Centre for Computational Personalised Medicine – International Research Foundation. As an active participant in the Working Group on Artificial Intelligence under the Polish Ministry of Digital Affairs and a speaker at conferences and industry events, she combines practical knowledge with the latest trends in legal regulations concerning artificial intelligence.
E-mail: w.niwinska@sanoscience.org

Michał Kosobudzki is an experienced lawyer specializing in data protection law, privacy, and modern technologies, with a particular focus on artificial intelligence. He is a certified Lead Auditor of the ISO/IEC 42001 standard for AI management systems, an internal auditor of the ISO/IEC 27001 information security standard, and holds the CCP1 certification as a Compliance Officer. With years of work in the public, private, and academic sectors, as well as with non-governmental organizations, he possesses unique expertise in legal consulting, regulatory implementation, and training in the field of emerging technologies. As a member of the International Association of Privacy Professionals (IAPP) and an active participant in the Working Group on Artificial Intelligence under the Polish Ministry of Digital Affairs, Michał combines practical knowledge with the latest trends in AI-related legal regulations. He currently serves as the Data Protection Officer at Sano.
E-mail: m.kosobudzki@sanoscience.org
Tutorial organiser
Sano – Centre for Computational Personalised Medicine – International Research Foundation.
Sano’s mission is to advance excellent science that leads to improved medicine and healthcare services, ultimately having a positive impact on people’s well-being through the application of computational and data-intensive methods. Sano is an independent research foundation, established with the financial support of a grant from the European Commission and other European and Polish funds.
These grants enable Sano to pursue cutting-edge research and innovation in the healthcare sector, fostering a collaborative environment for scientists and researchers to make significant advancements in medicine.
Sano consists of two branches: Research Teams and Support; we proudly represent the latter. Sano has five Research Teams: Personal Health Data Science, Computational Neuroscience, Extreme-Scale Data and Computing, Medical Imaging and Robotics, and Structural and Functional Genomics. There are plans to establish additional research teams.
Related Events / Previous Editions
Krakow Conference on Computational Medicine 2025 (15–17 October 2025, Kraków).
ICAIL 2025: Implementing Intelligence: Legal Challenges in Creating AI Solutions — a platform for knowledge exchange (June 2025).
FAQ
Do I need to know anything about AI?
No — the event is designed for beginners and intermediate users, with additional in-depth elements for those interested.
Will there be practical exercises?
Yes — we will test AI solutions on legal tasks and analyse errors.
Will you address issues of professional secrecy and data protection?
Yes — this is one of the core pillars of the programme.
How long does the workshop last and how is the time structured?
The workshop runs for four hours and includes a balanced mix of expert input, live demonstrations, hands-on tasks, and guided discussion. Short breaks are planned to help participants stay focused and engaged.
Do I need to bring my own laptop?
Yes, we recommend bringing a laptop so you can fully participate in practical exercises.