Objective: In recent years, the need to regulate robots and Artificial Intelligence has become apparent in Europe. The European Union needs a standardized regulation that ensures a high level of security in robotic systems to prevent potential breaches. A new regulation should therefore make clear that producers are responsible for identifying the blind spots in these systems and exposing their flaws, or, when a vulnerability is discovered at a later stage, for updating the system even if that model is no longer on the market. This article aims to suggest possible revisions of the existing legal provisions in the EU.

Methods: The author employed the Kestemont legal methodology, analyzing legal texts, comparing them, and connecting them with technical elements of smart robots, thereby highlighting the critical provisions to be updated.

Results: This article suggests several revisions to the existing regulatory proposals. According to the author, although the AI Act and the Cyber Resilience Act represent a first step in this direction, their general principles are not sufficiently detailed to guide programmers on how to implement them in practice, and policymakers should carefully assess in which cases lifelong learning models should be allowed onto the market. The author also suggests that the current proposal regarding mandatory updates should be expanded, as five years is a short time frame that would not cover the risks associated with long-lasting products, such as vehicles.

Scientific novelty: The author has examined the existing regulatory framework for AI systems and devices with digital elements, highlighted the risks of the current legal framework, and suggested possible amendments to the existing regulatory proposals.

Practical significance: The article can be employed to update the existing proposals for the AI Act and the Cyber Resilience Act.