Seminar

Laboratory seminars (PV273 in the course catalog) take place on Wednesdays, 10:30 – 11:30, in room A505, FI MU, Botanická 68, followed by an informal group lunch.
Standard lectures follow the format of a 30–40 minute presentation plus 15 minutes for questions. Slides are expected in English; the presentation is given in English or Czech, depending on the audience. For past seminars, go here.
- 18.2.2026
RNDr. Lukáš Hejtmánek, Ph.D.
The unbearable lightness of AI
Abstract: The CERIT-SC Center launched its AI large language model (LLM) inference service one year ago. What started with comparatively small models (around 70B parameters and roughly 40 GB of weights) has since evolved into a large-scale production environment. Today, we operate several models approaching 700B parameters and one 1000B-parameter model, all running on enterprise-grade DGX B200 and B300 hardware.

This presentation highlights recent advances in AI with a particular focus on operating LLM inference services at CERIT-SC. We share practical experience gained over the past months, including architectural decisions, operational challenges, and key lessons learned while scaling from early deployments to state-of-the-art infrastructure.

Beyond inference itself, we present the broader ecosystem of AI-enabled services built around these models. This includes agentic systems such as n8n, integrations with developer tools like VS Code, terminal environments, and Jupyter Notebooks, as well as the MCP servers we operate. We explain how these components fit together, why they form a critical part of the overall system, and how they support user-facing services such as chat.ai.e-infra.cz.

Finally, we demonstrate how this AI ecosystem translates into real productivity gains, accelerating everyday workflows and reshaping how we approach research, development, and operational tasks.
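Services of this kind are typically consumed through an OpenAI-compatible API. A minimal sketch of what a client call might look like; the endpoint path and model identifier below are illustrative assumptions, not the service's documented interface:

```python
# Minimal sketch of querying an OpenAI-compatible LLM inference service.
# The base_url path and model name are assumptions for illustration only;
# consult the chat.ai.e-infra.cz documentation for the actual values.
from openai import OpenAI

client = OpenAI(
    base_url="https://chat.ai.e-infra.cz/api/v1",  # assumed endpoint
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="llama-70b",  # hypothetical model identifier
    messages=[{"role": "user", "content": "Summarize this seminar topic."}],
)
print(response.choices[0].message.content)
```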
- 25.2.2026
doc. Mgr. Pavel Rychlý, Ph.D.
TBA
- 4.3.2026
RNDr. Václav Sobotka
Key principles in cross-domain hyper-heuristics
Abstract: Cross-domain selection hyper-heuristics typically focus on adaptively selecting search operations, so-called low-level heuristics (LLHs), from a predefined set. In contrast, we concentrate on the composition of this set and its strategic transformations. We systematically analyze transformations based on three key principles: solution acceptance, LLH repetitions, and perturbation intensity, i.e., the proportion of a solution affected by a perturbative LLH. We demonstrate the raw effects of our transformations on a trivial unbiased random selection mechanism. Additionally, we augment several recent hyper-heuristics with such strategic transformations, often effectively simplifying their designs. Using this approach, we show that the three aforementioned principles are simple yet powerful drivers of cross-domain search performance, and we outperform the current state-of-the-art hyper-heuristics on both the standard CHeSC cross-domain benchmark and three real-world domains.
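To make the terminology concrete, here is a minimal sketch of the kind of selection hyper-heuristic loop the abstract refers to, with unbiased random LLH selection, a simple acceptance rule, and an explicit perturbation-intensity parameter. The LLH set and toy objective are illustrative stand-ins, not the speaker's implementation:

```python
import random

# Toy selection hyper-heuristic: unbiased random LLH selection with a
# non-worsening acceptance rule. Everything here is illustrative only.

def perturb(solution, intensity):
    """Perturbative LLH: re-randomize a fraction of the solution."""
    s = solution[:]
    k = max(1, int(intensity * len(s)))
    for i in random.sample(range(len(s)), k):
        s[i] = random.random()
    return s

def local_search(solution, intensity=0.0):
    """Improving LLH: greedily shrink each component (toy step)."""
    return [x * 0.9 for x in solution]

def cost(solution):
    return sum(solution)  # toy objective: minimize the sum

def random_selection_hh(llhs, intensity, iterations=1000):
    current = [random.random() for _ in range(20)]
    best = current[:]
    for _ in range(iterations):
        llh = random.choice(llhs)             # unbiased random selection
        candidate = llh(current, intensity)
        if cost(candidate) <= cost(current):  # accept non-worsening moves
            current = candidate
        best = min(best, current, key=cost)
    return best

best = random_selection_hh([perturb, local_search], intensity=0.3)
print(f"best cost: {cost(best):.4f}")
```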
- 11.3.2026
Mgr. Samuel Gorta
Extending molecular dynamics simulations via latent space projection
Abstract: Molecular dynamics (MD) simulations provide critical insights into the microscopic behavior of systems comprising thousands to millions of atoms. However, the high computational cost associated with these simulations severely restricts the accessible trajectory lengths, effectively limiting the range of observable physical phenomena.
Recent methodological advancements (e.g., https://doi.org/10.1039/D0SC03635H) demonstrate that the complex dynamics of such systems can be effectively approximated within a reduced-dimensional latent space. This approach is rooted in the theory of transfer operators, which model the temporal evolution of probability distributions across the system’s state space. The latent space is formally defined by the basis of the transfer operator’s eigenfunctions. By ranking these basis functions according to their corresponding eigenvalues, less significant dynamical components can be truncated, achieving substantial dimensionality reduction while preserving the essential kinetics of the system. In this framework, the embedding of a specific molecular configuration into the latent space represents its probability values across the selected basis functions.
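In symbols, this spectral picture can be written roughly as follows; this is the textbook formulation, not necessarily the exact notation of the cited work:

```latex
% Transfer operator propagating a density p_t over lag time \tau, and its
% truncated spectral expansion (textbook form; notation illustrative).
% \phi_i, \psi_i are the left/right eigenfunctions; keeping only the k
% dominant eigenvalues yields the reduced-dimensional latent space.
\begin{align}
  p_{t+\tau}(x) &= \bigl(\mathcal{T}_\tau \, p_t\bigr)(x)
    = \sum_{i} \lambda_i(\tau) \, \langle p_t, \phi_i \rangle \, \psi_i(x) \\
  &\approx \sum_{i=1}^{k} \lambda_i(\tau) \, \langle p_t, \phi_i \rangle \, \psi_i(x),
  \qquad 1 = \lambda_1 \ge |\lambda_2| \ge \dots
\end{align}
```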
Critically, this mapping can be modeled using machine learning techniques without the need for an explicit analytical expression of the basis. By training on relatively short input trajectories, these models can learn the latent dynamics, offering a promising path toward extending the temporal reach of MD simulations far beyond current computational limits.
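As a concrete, deliberately simple (linear) instance of this idea, the dominant eigenfunctions can be estimated directly from time-lagged trajectory data via a TICA-style generalized eigenproblem. The toy trajectory and lag time below are purely for illustration:

```python
import numpy as np

# Linear sketch of learning a latent basis from a short trajectory
# (TICA-style); illustrative only, not the speaker's actual method.
rng = np.random.default_rng(0)
X = rng.standard_normal((10_000, 3)).cumsum(axis=0)  # toy trajectory (T, d)
tau = 10  # lag time (illustrative)

X0, Xt = X[:-tau], X[tau:]
X0 = X0 - X0.mean(axis=0)
Xt = Xt - Xt.mean(axis=0)

C0 = X0.T @ X0 / len(X0)   # instantaneous covariance
Ct = X0.T @ Xt / len(X0)   # time-lagged covariance

# Solve Ct v = lambda C0 v; eigenvalues rank the slow latent components.
vals, vecs = np.linalg.eig(np.linalg.solve(C0, Ct))
order = np.argsort(-vals.real)
vals, vecs = vals.real[order], vecs.real[:, order]

k = 2                       # truncate to the k slowest components
latent = X0 @ vecs[:, :k]   # embedding of each frame into the latent space
print(latent.shape)
```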
- 18.3.2026
RNDr. Tomáš Rebok, Ph.D.
TBA
Contact: Hana Rudová
