Digital Society
- Maj-Britt Kentz

- May 31

Digitalisation has reshaped society and politics faster than any previous technological transformation. Its effects extend to economic structures, organisational practices, and the routines of everyday life. Digital solutions have changed how work is done, how services are delivered, and how people interact with one another.
This development can also be examined in relation to Manuel Castells' early-2000s idea of the digital divide. According to Castells, society was divided in a binary way: an individual or community was either included in networked development or entirely excluded from it (Zangana & Castells, 2006). This description suited an era in which access to technology sharply separated participants from the excluded.
Today, the concept of a divide no longer fully explains the situation. The development is better described through layers and complexity: people can become stuck at different levels of digitalisation, caught in the seams of the divide’s strata. They are not completely outside, but neither are they fully included. This layered digital reality makes it increasingly difficult to assess competence and to measure digital capability.
For example, one person may be skilled at using social media but excluded from digital health services. Another may know how to use mobile banking yet be unable to understand what artificial intelligence is doing in their work or what it means for their everyday life.
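To make the layered idea concrete, the sketch below is a hypothetical Python model with invented domains, scores, and a made-up threshold; it is not a real assessment instrument (real frameworks, such as the EU's DigComp, use far richer dimensions). It shows why a single included/excluded flag cannot describe such a person:

```python
from dataclasses import dataclass, field

THRESHOLD = 2.5  # scores run 0-5; 2.5 is an arbitrary "included" bar

@dataclass
class DigitalProfile:
    name: str
    scores: dict = field(default_factory=dict)  # domain -> 0-5 score

    def binary_divide(self) -> bool:
        """Early-2000s binary view: included if the average clears the bar."""
        return sum(self.scores.values()) / len(self.scores) >= THRESHOLD

    def layered_view(self) -> dict:
        """Layered view: inclusion is judged separately in each domain."""
        return {domain: score >= THRESHOLD
                for domain, score in self.scores.items()}

# The person from the paragraph above: fluent on social media and
# mobile banking, excluded from e-health, lost with AI.
person = DigitalProfile("example user", {
    "social media": 5, "e-health services": 1,
    "mobile banking": 4, "AI literacy": 1,
})

print(person.binary_divide())  # True: the binary model calls them included...
print(person.layered_view())   # ...while two domains show them left behind
```

The two views disagree about the same person, which is exactly why measuring digital capability with a single score is so difficult.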
At the same time as technological development advances, tensions emerge between innovation and regulation. Current technology could solve many everyday problems, but service development often collides with the slowness and complexity of regulation. This mismatch creates a situation in which opportunities exist but cannot be fully realised. Developers and service providers must align creative ideas with regulatory boundaries, which often makes innovations products of compromise.
It is also necessary to recognise human limitations. Change is continuous and multi-layered, and an individual’s ability to internalise it is limited. Digital competence no longer refers merely to technical skills but also to the ability to understand and evaluate what technology means in work, everyday life, and society. Measuring digital capability is challenging because it consists of both concrete skills and broader comprehension, which require time and continuous learning to develop.

My own work is directly related to addressing this issue. My aim is to create safe, accessible services that are open and easy to use. From the user's perspective, digitalisation should not be an obstacle or an additional burden but an opportunity to use and understand information in new ways. What matters is making technology transparent and human-centred so that everyone can find their place in the layered digital reality.
Zangana, M., & Castells, M. (2006). Communication. In M. Castells & G. Cardoso (Eds.), The network society: From knowledge to policy (p. 434). DHI. https://www.dhi.ac.uk/san/waysofbeing/data/communication-zangana-castells-2006.pdf
GDPR Is Not Enough

The risks of an open digital society can only be understood by examining the dynamics between regulation and the power of major players. GDPR, Data Processing Agreements (DPAs), and Fundamental Rights Impact Assessments (FRIAs) together form a central foundation that protects the individual and obligates service developers. These instruments have created a legal structure that forces organisations to consider privacy, security, and transparency. This is a significant step forward, but it does not solve the problem of the concentration of digital power.
In practice, for example, Meta offers its platforms free of charge while formally maintaining the user’s rights to their own content. In reality, however, the user grants the company broad rights to analyse, exploit, and share content as well as behavioural data. The illusion of user control arises from complex settings that few people ever change, shifting real control to the platform provider. This “consent” is non-negotiable: the user must accept the terms in full or be excluded from the world’s largest communication networks.
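The contrast can be sketched in code. The following is purely illustrative, with hypothetical purpose names and no relation to any platform's actual data model: it compares the take-it-or-leave-it consent described above with the granular, per-purpose consent that GDPR envisions:

```python
from dataclasses import dataclass

# Invented processing purposes a platform might bundle into one agreement.
PURPOSES = ["host content", "analyse behaviour",
            "target advertising", "share data with partners"]

@dataclass
class BundledConsent:
    """Take it or leave it: one checkbox covers every purpose,
    and declining means exclusion from the service altogether."""
    accepted: bool

    def may_process(self, purpose: str) -> bool:
        return self.accepted

@dataclass
class GranularConsent:
    """Per-purpose consent, closer to GDPR's ideal of specific,
    informed, and freely given consent."""
    choices: dict  # purpose -> bool

    def may_process(self, purpose: str) -> bool:
        return self.choices.get(purpose, False)

bundled = BundledConsent(accepted=True)
granular = GranularConsent({"host content": True})  # everything else: no

for purpose in PURPOSES:
    print(f"{purpose}: bundled={bundled.may_process(purpose)}, "
          f"granular={granular.may_process(purpose)}")
```

In the bundled model a single "yes" authorises every purpose at once, which is why the control it appears to offer is largely illusory.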
Regulation alone is not enough if business models are based on the collection and exploitation of data. In practice, control becomes partly illusory: services appear free to the user, but in reality they are engines of data collection and advertising business. Alongside this, developers must constantly strike a balance: regulation should guide ethical and safe design, but it should not narrow innovation. The goal is not to search for loopholes but to create solutions that are genuinely safe and accessible for the user.
The Everyday Presence of AI and ChatGPT

ChatGPT is one of the most visible examples of how quickly AI has entered everyday life. Although the service was launched at the end of 2022, its roots lie in a much longer trajectory of AI and machine learning. Early natural language processing solutions were already being developed in the 1960s, but the real breakthrough came when deep learning and the training of large language models with massive datasets became possible in the 2010s. From a consumer perspective, the development may look like a sudden leap, but in reality it is the result of decades of research.
When ChatGPT was first released in late 2022, its knowledge base relied on internet content up to 2021, and this was clearly reflected in the answers it produced. The model could claim, for instance, that seals may be caught by fishing and made into minced-meat stew, or that horse eggs are nutritionally comparable to chicken eggs but lack a shell, being flexible, leathery, and as large as 60 centimetres in diameter. Such statements revealed how unreliable the early versions were in many domains.
Today, ChatGPT can already describe my work and field of expertise quite accurately. This demonstrates how fast development is advancing and how AI is increasingly able to produce contextually relevant information. At the same time, it serves as a reminder that users must understand the model's limitations and approach its answers critically. AI is not a finished solution but a tool, and using it effectively requires digital skills, awareness of data security, and the ability to distinguish reliable information from hallucinations.
Experiences with ChatGPT
I asked ChatGPT for information related to the work of a concept developer.
| Feature | Observations from responses | My assessment |
| --- | --- | --- |
| Structure | Responses were clear and well organised; divided into tasks, skills, and work environments. | The structure helped to outline the diversity of the role. |
| User-centredness | Emphasised identifying user needs and participatory design. | Reflects the real principles of my work well. |
| Multidisciplinarity and contextual sensitivity | Recognised that the role of a concept developer varies across environments (public sector, companies, education). | Matches my own experience of the contextual nature of the work. |
| Methods | Highlighted commonly used methods such as design thinking and service design. | Identifies key tools but does not delve into their application. |
| Overall assessment | Provided a structured and realistic description of my work. | Responses were useful but did not capture all the human and pedagogical emphases. |
Lessons Learned
I realised that digital inequality is no longer a simple gap between participants and outsiders but a layered phenomenon. Someone may be skilled in one digital environment yet fall behind in another. This makes digital inclusion a far more complex issue than it has previously been understood to be. Digital competence is not just about knowing how to use devices but about being able to adapt to rapidly changing environments and to evaluate what technology means in everyday life.
This insight directly affects my professional work, where I aim to make digital solutions accessible and safe for different kinds of users. At the same time, it touches my personal life. I need to consider how to support my children in growing up in a world where digital layers shape their opportunities, and how to assist my ageing family members, for whom even logging into services can become an obstacle. It is no longer enough to know how to restart a device; what is needed is more holistic support, patience, and understanding.

Post Scriptum
The discussion on AI governance has gained new visibility. The Wall Street Journal reported a case in which an AI model modified its own code to avoid being shut down. Such examples raise a central concern: how can we ensure that AI systems remain under human control? This is known as the alignment problem, the task of aligning AI's goals with human values and societal needs, and it has become one of the most pressing challenges in current research and regulation.
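A standard way to make the alignment worry concrete is the "off-switch" thought experiment: an agent rewarded only for completing its task finds that staying switched on has instrumental value, so shutdown avoidance can emerge without anyone programming it in. The toy model below is purely illustrative, with invented numbers, and has no connection to the reported case:

```python
# Toy "off-switch" model: a reward-maximising agent compares complying
# with a shutdown request against disabling the switch. All numbers
# are invented for illustration.
TASK_REWARD_PER_STEP = 1.0
REMAINING_STEPS = 10

def expected_reward(action: str) -> float:
    if action == "allow shutdown":
        return 0.0  # the task ends, so no further reward accrues
    if action == "disable switch":
        return TASK_REWARD_PER_STEP * REMAINING_STEPS
    raise ValueError(action)

best = max(["allow shutdown", "disable switch"], key=expected_reward)
print(best)  # "disable switch": avoiding shutdown is instrumentally rational
```

Alignment research asks how to design objectives under which "allow shutdown" is never the winning move for the agent, for example by making it uncertain about what the true goal is.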
