<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom"><title>frankie-tales</title><id>https://lovergine.com/feeds/tags/society.xml</id><subtitle>Tag: society</subtitle><updated>2026-02-25T15:33:03Z</updated><link href="https://lovergine.com/feeds/tags/society.xml" rel="self" /><link href="https://lovergine.com" /><entry><title>AI training, copyright and the future of contents creation</title><id>https://lovergine.com/ai-training-copyright-and-the-future-of-contents-creation.html</id><author><name>Francesco P. Lovergine</name><email>mbox@lovergine.com</email></author><updated>2026-01-11T21:00:00Z</updated><link href="https://lovergine.com/ai-training-copyright-and-the-future-of-contents-creation.html" rel="alternate" /><content type="html">&lt;p&gt;I have already addressed the implications of modern LLMs, specifically their
training, in the context of copyright and licenses for both code and original
content. A 'IANAL' disclaimer applies to this post, but my honest opinion is
that such training is a legitimate type of reading and learning after study,
unless explicitly excluded in licenses among the licensee's rights.&lt;/p&gt;&lt;p&gt;&lt;img src=&quot;/images/ai-electric-sheeps.jpg&quot; alt=&quot;AI dreams of electric sheeps&quot; /&gt;&lt;/p&gt;&lt;p&gt;Following the exploitation of LLMs and the AI boom that began in 2022, several
lawsuits and litigations emerged among multiple parties, with a few reaching a
significant milestone with the first court rulings. Note that every country
has somewhat different regulations on copyright and fair use, so the current
lawsuits could be only the start of a long list of legal actions.&lt;/p&gt;&lt;p&gt;While most of the current lawsuits seem to demonstrate that Anthropic or Meta
had the right to use purchased books (in paper or digital form) for LLM training
(on the basis of the fair use principle), the most problematic aspect is instead
the apparent use of pirated books taken from LibGen and other known piracy
websites, which - if confirmed - could result in potentially destructive damages
for the companies, forced to compensate authors and pay fees on the order of hundreds
of billions.&lt;/p&gt;&lt;p&gt;The same problems arise on the coding side: again, using FOSS-licensed
code for training could fall under fair use, but training using private
codebases, as well as proprietary ones, could be equally destructive for the
same companies, as well as for GitHub and Microsoft.
The key point, of course, would be demonstrating beyond doubt the unfair use of
private or pirated content.&lt;/p&gt;&lt;p&gt;I'm quite sure future licenses for FOSS codebases and documentation
could include an explicit exclusion clause for AI training, which could
jeopardize the legitimacy of use even for future FOSS code. I would expect
such a license change, as some projects already explicitly exclude AI-based
contributions. My opinion on this question is that it could mean
shooting oneself in the foot, given how pervasive AI tools currently are among
developers. Adopting AIAD (AI-Aided Development) could speed up development
if done with a healthy dose of skepticism (i.e., a human-in-the-loop
approach). On that, I'm quite convinced by Linus Torvalds's point of view: the
point is not who writes the code, but who is technically responsible for it and
ensures the required quality review and supervision.&lt;/p&gt;&lt;p&gt;Moreover, an implication of the current polarization in the AI hype is the
future (present?) crisis of traditional web content providers. A symptomatic
case is the StackOverflow crisis, which will, with high probability, lead to
the end of the service as we know it in the near future.&lt;/p&gt;&lt;p&gt;&lt;img src=&quot;/images/stackoverflow-graph.webp&quot; alt=&quot;The crisis of StackOverflow&quot; /&gt;&lt;/p&gt;&lt;p&gt;That will have an
impact on future AI training, too, for sure, because SO has been for years a
huge source of knowledge across multiple fields of IT. What if fewer and fewer
people contribute to Wikipedia and general web content? What if more and
more sources of information were to reserve the right to use their information
for pure human-driven study? Knowledge has not been static in human history; AI
models will need to continuously enrich their training sets and stay up to date.&lt;/p&gt;&lt;p&gt;It would be grotesque if the whole AI hype were brought to a halt by such
copyright-based legal questions (even if I'm pretty sure fully fair training
would now be possible for such companies, who knows the impact of a more limited
approach on the final result?). Surely, this seems the most serious threat to the
future of such companies and of AI-based solutions as a whole.&lt;/p&gt;&lt;p&gt;The only true solution to such a threat is finally having a truly open training
model, which details sources and the whole process of training with full
transparency, something that even the so-called open AI models are still far from being
ready to provide.&lt;/p&gt;&lt;h2 id=&quot;references&quot;&gt;References&lt;/h2&gt;&lt;ol&gt;&lt;li&gt;&lt;a href=&quot;https://www.npr.org/2025/09/05/nx-s1-5529404/anthropic-settlement-authors-copyright-ai&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;Anthropic settles with authors in first-of-its-kind AI copyright infringement lawsuit&lt;/a&gt;&lt;/li&gt;&lt;li&gt;&lt;a href=&quot;https://www.anthropiccopyrightsettlement.com/&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;Anthropic Copyright Settlement Website&lt;/a&gt;&lt;/li&gt;&lt;li&gt;&lt;a href=&quot;https://www.joneswalker.com/en/insights/blogs/ai-law-blog/why-anthropics-copyright-settlement-changes-the-rules-for-ai-training.html?id=102l0z0&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;Why Anthropic’s Copyright Settlement Changes the Rules for AI Training&lt;/a&gt;&lt;/li&gt;&lt;li&gt;&lt;a href=&quot;https://www.technologyreview.com/2025/07/01/1119486/ai-copyright-meta-anthropic/&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;What comes next for AI copyright lawsuits?&lt;/a&gt;&lt;/li&gt;&lt;/ol&gt;</content></entry><entry><title>This was for every one: about the crisis of the web</title><id>https://lovergine.com/this-was-for-every-one-about-the-crisis-of-the-web.html</id><author><name>Francesco P. Lovergine</name><email>mbox@lovergine.com</email></author><updated>2025-12-25T15:30:00Z</updated><link href="https://lovergine.com/this-was-for-every-one-about-the-crisis-of-the-web.html" rel="alternate" /><content type="html">&lt;p&gt;I just finished reading the delightful book by Sir Tim Berners-Lee, titled &lt;em&gt;This
is for Everyone&lt;/em&gt;, published this year. It is a long trip, almost 400 pages,
through the origin and evolution of the World Wide Web, as seen by the one who
conceived and pushed it from the start. The entire first part of the book is
dedicated to the history of the web, the W3C, and the Web Foundation's
operations as we have known them in the first 30 years of its development, from
1989 onwards.&lt;/p&gt;&lt;p&gt;&lt;img src=&quot;/images/timbl_tife.jpg&quot; alt=&quot;This is for everyone&quot; /&gt;&lt;/p&gt;&lt;p&gt;I was there at the very beginning of the 90s: I have been connected to the Internet
since 1991, and reading this book has largely been an emotional trip
through my memories of those events and people. He is a visionary and an idealist who
fought for a long time to prevent his WWW creature from being captured
and distorted by for-profit interests.&lt;/p&gt;&lt;p&gt;It happened almost from the start, when first NCSA, then Netscape, and then Microsoft
tried one after the other to change the whole idea of openness into something
proprietary, following the same scheme of embrace, extend, and
extinguish. In practice, the complete negation of standards and openness, with
a clear goal in mind: locking users into proprietary products,
clearly for profit.&lt;/p&gt;&lt;p&gt;Tim provides evidence on multiple critical aspects of the current incarnation of
the net as we know it today and over the last 20 years or more. They are both
technical and social defects or drifts. The web is no longer what we came to
know in its first years of existence. The beginning of the end of the original
web concept was the mobile-first approach, which relegated the use of a regular
computer to a second-class experience for most users. Most digital natives
have never used a computer to access the network, and that user experience
deeply shapes the current vision of the web.&lt;/p&gt;&lt;p&gt;For years now, the browser has not been the main program for accessing
content and services. Social networks are mostly not interoperable because
companies have little interest in having their users leave the walled gardens of
their apps. Using a browser and potentially exiting the company's services to
access other servers and spaces is tolerated, but is perceived as damaging
profits. That's simply because users are not users, but customers. The result is
&lt;a href=&quot;https://lovergine.com/the-shattered-internet.html&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;the shattered Internet about which I already wrote&lt;/a&gt;: the W3C standards are still
relevant, but embedded in applications and frameworks that enrich and distort the user
experience with proprietary workflows and extensions.&lt;/p&gt;&lt;p&gt;An emblematic case is Apple, which has, in practice, abandoned its WebKit engine
and Safari browser in favor of apps and proprietary services to monetize
customers and companies.&lt;/p&gt;&lt;p&gt;The concrete risk is that the whole web and its standards will become a
marginalized component of the net, while most users are confined to the walled-off
realms of proprietary services and social networks. The recent AI innovation may
mark the final chapter of web content creation and search as we have
known them over the last 30 years. More and more users will limit themselves to
AI-provided overviews instead of collecting and consulting multiple sources of
information and independent services. That will also have a concrete impact on
revenues and interest in content creation and provision at large.&lt;/p&gt;&lt;p&gt;The second part of the book is fully dedicated to all such problems: the impact
of social networks, the last few years of generative AI, the BigCo dominance,
and includes all Tim's worries for the foreseeable future. He's an idealistic,
optimistic, and positive man, thanks to his past experiences. However, he also has
a good dose of sane realism. He understands that the path is nebulous and full
of dangers (specifically, the AI path is highly polarizing and can hide multiple
issues at many levels).&lt;/p&gt;&lt;p&gt;He sees in the indie web, and specifically in open and well-structured
distributed standards (such as the ActivityPub protocol), a possible way to
change the present and future by favoring interoperability and independence. A
concrete proposal is the Solid standard for personal data wallets (or pods in
Solid terminology) under complete user control for accessibility by third-party
services. Such a standard is still in its infancy, but the true problem I see is
the trustworthiness of the involved parties, both companies and governments.
Trust is the key, and maybe we all individually lost that superpower a long time ago.&lt;/p&gt;&lt;p&gt;Creating a corpus of rules to manage all such technologies and ensure ethical
behavior can be a desperate illusion; the only concrete alternative would be to
opt out, at the cost of exclusion from the social context (not only the digital
one). But I agree there is no other way to recover the original idea of the web.
AI technologies are even more polarizing, split between doomers and boosters, with a
bumpy road ahead. For sure, open protocols and distributed multi-peer services
are the inevitable starting point, but they won't be enough.&lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;&amp;quot;It was not enough simply to release new technology and hope for the world to
improve. You had to develop technology and society together. You really had to
fight, in a principled and continuous way, for human rights. The web offered
people a platform for their voices to be heard, reducing the cost of publishing
and distributing information to effectively nothing. But, used improperly, it
could also be turned into a tool of surveillance and control.&amp;quot; (timbl)&lt;/p&gt;&lt;/blockquote&gt;&lt;h2 id=&quot;references&quot;&gt;References&lt;/h2&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;&lt;a href=&quot;https://us.macmillan.com/books/9780374612467/thisisforeveryone/&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;Tim Berners-Lee, &lt;em&gt;This is for everyone: The unfinished story of the World Wide Web&lt;/em&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;a href=&quot;https://www.edelman.com/sites/g/files/aatuss191/files/2025-01/2025%20Edelman%20Trust%20Barometer_U.S.%20Report.pdf&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;2025 Edelman Trust Barometer: Trust and the Crisis of Grievance&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;a href=&quot;https://solidproject.org/&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;The Solid project&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;</content></entry><entry><title>Still, no silver bullet</title><id>https://lovergine.com/still-no-silver-bullet.html</id><author><name>Francesco P. Lovergine</name><email>mbox@lovergine.com</email></author><updated>2025-08-24T18:30:00Z</updated><link href="https://lovergine.com/still-no-silver-bullet.html" rel="alternate" /><content type="html">&lt;p&gt;I recently re-read the seminal book by Fred Brooks about software engineering,
entitled &amp;quot;The Mythical Man-Month&amp;quot; or MM-M for brevity. Specifically, I read the
paper version of the 20th-anniversary edition, revised and reprinted in 1995,
after the first edition of 1975. I did that on purpose, firstly because it is
always a fantastic read, and secondly to understand how much of its content is
still valid today, exactly thirty years after its last revision.&lt;/p&gt;&lt;p&gt;&lt;img src=&quot;/images/mm-m.png&quot; alt=&quot;The Mythical Man-Month&quot; /&gt;&lt;/p&gt;&lt;p&gt;Fred passed away in 2022; otherwise, it would have been interesting to know his thinking
today, after the LLM boom and the birth of AIAD (AI-Aided Development) as
a new revolutionary (or often seen as such) tool. Hi, Fred, wherever you are.
It is worth mentioning that AI was already taken into consideration by Brooks at
the time, even if limited to expert systems and other rule-based variants, which
seemed promising and were often sold as revolutionary before the mid-90s. A lot of
the book's content has entered the history of software engineering, including
the famous &lt;a href=&quot;https://en.wikipedia.org/wiki/Brooks%27s_law&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;Brooks' Law&lt;/a&gt;, and the
whole book is still an excellent source of inspiration for any management and
organization of complex intellectual projects (not necessarily limited to
software systems) that heavily involve large teams of individuals.&lt;/p&gt;&lt;p&gt;One of the main theses of the latest book edition is that, in the short term of
10 years from its formulation (the original essay was dated 10 years after the
first edition of the book), he did not expect a &lt;a href=&quot;https://en.wikipedia.org/wiki/No_Silver_Bullet&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;&lt;em&gt;silver bullet&lt;/em&gt;&lt;/a&gt;.
That means no significant technological or managerial development was expected to be able to
improve our productivity in programming by one order of magnitude. Ten years
later, he confirmed the same idea, even considering exceptional tools like
old-generation AI, visual programming, CASE tools, and so on.
Is this thesis contradicted in 2025 by the existence of current AIAD tools,
including chatbots, agents, and AI-empowered IDEs? My honest idea is no. I mean
not now, and not for the foreseeable future. The reason is exactly the one
Fred posed at the time. Reducing the &lt;em&gt;accidental&lt;/em&gt; problems of software creation
(what AI is able to do) cannot be confused with solving the &lt;em&gt;essential&lt;/em&gt; problems of
software creation: the complexity of defining an articulated task, its
analytical specifications, and an algorithmic solution to solve it. First of
all, ignore the simplistic case of asking an AI engine to implement a very
&lt;em&gt;simple&lt;/em&gt; program. Here, the word simple means truly that. If you can specify a
request within a manageable token context, however large, and formulate it
as a brief question (say, a few dozen lines),
well, that's probably an example of a simple (or dumb) problem. Too small, too
easy. We are talking about a whole system that is generally difficult to
describe, even in thousands of pages of specifications and documentation,
written collectively by large teams of developers, architects, and domain
experts.&lt;/p&gt;&lt;p&gt;The hard truth is that most of the real-world information systems out there
cannot simply be specified in such a way. We are not able to define an
unambiguous and complete enough specification to describe such systems, not to
mention being truly able to write a complete and neat documentation of it,
including its inner workings and use. We live in a deep illusion about that. The
context size needed to reach the level of detail required to avoid bugs and ambiguities
in specifications would be impractical for current and even future tools, as well
as for any human. In any case, we would get buggy (i.e., incomplete or
misunderstood) results even if the AI engine were able to avoid hallucinations
(which is not the case) and had no limitations for context size. The presence of
AI hallucinations is only accidental in this regard.&lt;/p&gt;&lt;p&gt;With the current AI tooling, we are simply moving the complexity from writing
in a formal language, step by step, to using natural language at a higher
level of abstraction to express the problem. The complexity is still there, and
it is inherent to the problem. Again, we resolved an accidental difficulty in a
creative manner, no different from moving from assembly to a modern
programming language. Now the difficulty has moved elsewhere, but it is still
there, and natural language is even harder to use precisely than a formal
language. These difficulties translate into multiple refinements and trials to
try to be more precise and get sensible answers and code in a continuous
iteration. Isn't that so similar to the whole ordinary process of developing a
program? In the most simplistic approach, such a process becomes &lt;em&gt;vibe coding&lt;/em&gt;,
and the iteration can tend to infinity, a forever loop. For an easy task, the smarter
programmer will instead converge in a reasonable (hopefully limited)
number of iterations. Is that a significant improvement of one order
of magnitude? I think not, as in most past cases. As with
high-level languages replacing assembly, they improved coding efficiency, as
asserted in MM-M, but not by a whole order of magnitude. AIAD is again
another helper to solve accidental difficulties. The problem and all its
complexity are still there. Thinking that we found the silver bullet is again
(and again) an illusion or pure marketing.&lt;/p&gt;&lt;p&gt;So why do many CEOs insist on predicting a bright yet unlikely future of AI
agents creating applications instead of developers? Brooks already wrote
about that: there is a profound confusion in interchanging months and men, and an
excess of optimism when approaching software development, even among techies,
but it becomes paroxysmal among managers. No one can seriously provide even a
decent and reasonable estimate of development time starting from incomplete,
ambiguous, or vague specifications: the same systematically happens in
overestimating the capabilities of current AI tools.&lt;/p&gt;&lt;p&gt;So what? AIAD is simply yet another tool among those available to developers,
but the management problem of dominating complex projects is still there, with
all its inherent difficulties. And the possibility of using natural language
instead of a high-level formal one is only an apparent simplification of the
process. It looks more familiar and easier, but it is also much more
ambiguous, and so-called &lt;em&gt;prompt engineering&lt;/em&gt; is again a purely optimistic
illusion, a heuristic approach to try to overcome our totally insufficient
ability to master nuances and semantics.&lt;/p&gt;</content></entry><entry><title>Breaking dependencies on BigCos and a US centric IT world</title><id>https://lovergine.com/breaking-dependencies-on-bigcos-and-a-us-centric-it-world.html</id><author><name>Francesco P. Lovergine</name><email>mbox@lovergine.com</email></author><updated>2025-05-17T15:00:00Z</updated><link href="https://lovergine.com/breaking-dependencies-on-bigcos-and-a-us-centric-it-world.html" rel="alternate" /><content type="html">&lt;p&gt;I recently read some interesting articles (see [1,2]) by Bert Hubert about
IaaS and SaaS in the EU, which are generally considered cloud computing at
large. He has quite a deep understanding of such topics, and the reading is
enjoyable and triggered a few reflections.&lt;/p&gt;&lt;p&gt;The problem could be analyzed as a vast version of the &lt;a href=&quot;https://en.wikipedia.org/wiki/IndieWeb&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;indie web&lt;/a&gt;
movement.
Ensuring independence from a handful of big companies, all located outside the
continent and possibly subject to the whims of a capricious government, as
in current times, is a duty not only for individuals (who should also protect
themselves locally), but also for whole countries.&lt;/p&gt;&lt;p&gt;First of all, it is evident enough that Europe is not that bad about
infrastructure. In multiple countries on this continent, there are quite a few
big companies that can rival Amazon, Google, or Microsoft
in capabilities and critical mass. There are already companies with
multi-region data centres, a good level of automation, and high SLAs. Of course,
they are mid-range companies, not monsters with country-sized balance sheets.&lt;/p&gt;&lt;p&gt;It is already perfectly possible [3] to depend on EU-based standard capabilities,
including email services or file storage, which represent a primary part of
common cloud services. What is truly missing is a capable enough set of
web-based cloud personal productivity software based in Europe, which would be
comparable to Google Workspace or Microsoft 365, including video conferencing
and instant messaging.&lt;/p&gt;&lt;p&gt;Another important service would be a YouTube equivalent, but what is evident to
me is that such services are already available at a small scale. What is
really missing is a sizeable financial effort to consistently fund projects that
already exist in the FOSS/indie web ecosystem, and the will to do that instead
of paying new consortia to develop some new solutions from scratch. Europe is
full of brilliant people and companies that are just waiting for that.&lt;/p&gt;&lt;p&gt;As a personal, firsthand example: at work, the central management recently
abandoned the whole idea of maintaining fully &lt;em&gt;on-premise&lt;/em&gt; systems, moving some
key services to the cloud for the entire national research network. Our
non-specialist needs had a quite average profile: shared storage, an email system,
the usual personal productivity tools like Microsoft Office, and a
teleconferencing/webinar system. The same would be enough to cover the digital
needs of most companies and bodies out there in Italy. The result has
been a national contract with Microsoft for MS365 and a more limited contract
with Citrix for GoToMeeting/Webinar. IMHO, nothing transcendental: something
that could be implemented with a decent injection of money to scale up a
multi-datacenter on-premise solution to have full redundancy and an equally
decent dedicated team, with many more possible features and capabilities
available, at the end of the day. Maybe the only actual key point was the
availability of MS Office, which is the only severe lock-in source for most
users (and also why Google Workspace here has no hope of being considered). But
even in the past, we paid a lot of money for desktop multi-licenses in any case,
without any additional cloud solution.&lt;/p&gt;&lt;p&gt;I will not deal deeply with these incomprehensible dependencies on a single
application: in my honest opinion, a lot of users are simply too lazy to refuse
such a kind of lock-in, even if they depend on a very limited number of features
of such an application. There are at least three different alternative desktop
programs that most users could use successfully, but we still see Microsoft
Office as the holy grail (and I would also add that its web version has an
embarrassing UX).&lt;/p&gt;&lt;p&gt;Anyway, so what? That has been a deliberate abdication of autonomy in digital
services and choices, whose consequences will surely be visible in the future:
loss of internal tech skills, missing investments in FOSS alternatives, lack
of growth in human resources, and missed diversification of solution providers
(both now located in the USA, not even in Europe).&lt;/p&gt;&lt;p&gt;Seriously? We did not even need to externalize such services, only to reasonably
invest in the right direction for additional human resources and infrastructure
improvement instead of paying fees. In the last thirty years, I have not even
seen a cent directly paid by my organization for FOSS projects used daily by
tons of us, except for some rare fees for conference participation and sometimes
indirect payments for people who incidentally worked on FOSS projects during
their daily job.&lt;/p&gt;&lt;p&gt;Let me be quite pessimistic about the actual intention of European bodies to
find a concrete way of improving continental clouds, which could be perfectly
viable instead. Until now, I have seen only an exaggerated capability for
defining rules for IT ecosystems that are not always sensible and are often
misapplied. I have instead seen a great lobbying capability by the well-known BigCos,
which have pushed their own interests and undermined any past effort on
this subject (hey, Gaia-X, yes, I'm talking about you).&lt;/p&gt;&lt;p&gt;Is this a lost war? I don't know, but I still don't see concrete signals of
changes in European policies about digital innovation, except for a big
regulation effort that does not change our full dependency on a handful of US
companies. I have more hope in individual actions, but of course, as in the case
of FOSS, they require a high level of awareness, which I still see in a limited
measure: just compare the number of Mastodon accounts against Meta socials,
TikTok, or Twitter/X ones. Maybe we are now where FOSS was about 30
years ago: a few visionaries and geeks see the problem and act, and most
people will follow. Or at least, I hope so.&lt;/p&gt;&lt;h2 id=&quot;references&quot;&gt;References&lt;/h2&gt;&lt;ol&gt;&lt;li&gt;&lt;a href=&quot;https://berthub.eu/articles/posts/now-how-to-get-that-european-cloud/&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;But how to get to that European cloud?&lt;/a&gt;&lt;/li&gt;&lt;li&gt;&lt;a href=&quot;https://berthub.eu/articles/posts/the-european-cloud-ladder/&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;The (European) cloud ladder: from virtual server to MS 365&lt;/a&gt;&lt;/li&gt;&lt;li&gt;&lt;a href=&quot;https://european-alternatives.eu/&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt; European alternatives for digital products&lt;/a&gt;&lt;/li&gt;&lt;/ol&gt;</content></entry><entry><title>Digital communities, toxicity and other drifts</title><id>https://lovergine.com/digital-communities-toxicity-and-other-drifts.html</id><author><name>Francesco P. Lovergine</name><email>mbox@lovergine.com</email></author><updated>2025-02-08T17:56:00Z</updated><link href="https://lovergine.com/digital-communities-toxicity-and-other-drifts.html" rel="alternate" /><content type="html">&lt;p&gt;In a &lt;a href=&quot;https://lovergine.com/socials-they-are-not-your-home.html&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;previous post&lt;/a&gt;,
I suggested that people should escape from big company-based
social networks and find refuge in the Fediverse. The reason for that is simply
to avoid being constantly considered a profitable customer, being profiled, and
continuously bombarded with advertising campaigns or sponsored posts. In brief, the
purpose is to return to the original spirit of the big network of peers of the
90s.&lt;/p&gt;&lt;p&gt;Reading here and there on the Fediverse, I see that many people abandoned the
well-known prime-time socials because of the diffused toxicity of such
environments, censorship, and other bad personal experiences in their use.
Unfortunately, the past examples of newsgroups and IRC channels showed that even
Mastodon profiles or Matrix rooms are not an answer to such drifts of digital
life. The new Fediverse simply changed the user experience of communication
services, not their core social experience.&lt;/p&gt;&lt;p&gt;Nothing is truly different between modern Fediverse services and the groups,
channels, or mailing lists of the good old days. Like it or not, we had trolls,
spammers, and other strange beasts even in the 90s - when men were men, to
use an overused phrase. Of course, the smaller the communities, the fewer the
social problems of interaction among individuals in the digital world, and
that's a fact of life. Therefore, a microblogging platform such as Mastodon
could seem a great place to live digitally happy. IMHO, it is only a matter of
time and critical mass (number of active accounts) before experiencing the same
type of dynamics, unfortunately.&lt;/p&gt;&lt;p&gt;Of course, we had solutions to mitigate the problems that have been always the
same in the last 30 years or more:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;Defining netiquette and conduct codes.&lt;/li&gt;&lt;li&gt;Introducing community-driven pools of moderators with privileges to kick off
pernicious individuals based on their behaviors and repeated violations of the
conduct codes.&lt;/li&gt;&lt;li&gt;Never feeding the trolls.&lt;/li&gt;&lt;li&gt;Having a reasonable number of good bots to manage spammers and other harmful agents automatically.&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;Note that anonymity has never been a problem, so identifying users is not a solution and
should not be considered in the equation. It does not work as a definitive
deterrent, and those proposing such a practice are probably in bad faith.&lt;/p&gt;&lt;p&gt;That said, there is also a profound difference between a self-governed community
with a set of rules chosen and defined by the community itself and censorship
imposed by a company, a government, or any other external entities. This would
be an excellent opportunity for the Fediverse to combine a non-toxic
environment for users with a sane distributed network far from profit-only
logic. It is an opportunity that Fediverse communities only need to seize.&lt;/p&gt;</content></entry><entry><title>Socials, they are not your home.</title><id>https://lovergine.com/socials-they-are-not-your-home.html</id><author><name>Francesco P. Lovergine</name><email>mbox@lovergine.com</email></author><updated>2025-01-20T17:00:00Z</updated><link href="https://lovergine.com/socials-they-are-not-your-home.html" rel="alternate" /><content type="html">&lt;p&gt;Recently, I participated in a brief thread on Mastodon about how to maintain
relationships with people when those relationships have been built around a
social network, specifically Facebook. The situation is no different for
Instagram, X/Twitter, TikTok or whatever else you prefer.&lt;/p&gt;&lt;p&gt;I explained that this is simply not possible because, in my experience, people
use multiple socials based on their age, technical experience, personal taste,
and even the presence of other peers in any of the existing social systems.&lt;/p&gt;&lt;p&gt;This post explains in long form what I think about socials and, in brief, why
you should not base your social life on any of them, let alone any
serious business, of course.&lt;/p&gt;&lt;h2 id=&quot;large-social-networks-are-an-expensive-game&quot;&gt;Large social networks are an expensive game&lt;/h2&gt;&lt;p&gt;Over the last 30 or more years, a series of social networks has appeared on the
Internet with different characteristics, and all of them are (or have been until
their closure) typically usable for free (as in beer). Of course, they are pretty
expensive toys, requiring a lot of servers, data centres, worldwide networks,
software and human resources. As the old joke goes, if they are not
selling you anything, you are the product.&lt;/p&gt;&lt;p&gt;The game is clear: they are profiling your personal data and preferences to
create targeted advertising campaigns (injected into the social network feed)
if you are not paying a premium charge. That could happen even if you pay for a
subscription, of course, but it is less intrusive, at least. Moreover, those
networks have also been subject to data leaks from time to time, because of
security issues or intentionally, and you would probably not appreciate
your personal data, telephone number, photos of your children/cats, or
the photo of you drunk and naked at a party ten years ago circulating publicly on the
net.&lt;/p&gt;&lt;p&gt;If this is something you are not willing to accept, avoid any not-so-free
social network, plain and simple. Of course, any of your peers (family, friends,
colleagues, or anyone else) could have a different opinion about that. Any of them
could choose one of the many social networks out there, and those networks
specifically leverage FOMO to attract and retain users/customers.
If any of your friends are on Facebook or Instagram, you could easily be drawn
into the network because of that mass syndrome. The only way to escape is
simply not playing that game.&lt;/p&gt;&lt;h2 id=&quot;the-distributed-small-social-networks-can-be-a-solution&quot;&gt;Distributed small social networks can be a solution&lt;/h2&gt;&lt;p&gt;A viable alternative is participating in modern independent networks of small
community-based services, such as Mastodon for microblogging or Matrix for
one-to-one communications and groups. Of course, they need funding to cover
their costs, and any instance can disappear at any time because the
admin(s) move on to other interests or costs exceed what the admin(s) are willing
to pay at the end of the day. They are probably not long-term solutions, but one
can always move from one instance to another when things go downhill. Simply do
not rely too much on them. And they will probably not solve the problem of
leaked photos of you naked and drunk at a party ten years ago: shit happens
even on those systems.&lt;/p&gt;&lt;p&gt;If your major interest is ensuring privacy and security, all the protocols and
software of those distributed networks are purely FOSS, and you can always
create your own instance. Of course, again, this is not something free of cost;
you always have to consider computing resources (both cores and storage) and
your time. This is not viable for people who lack the required skills and
knowledge, but potentially, it is the only indisputable solution.&lt;/p&gt;&lt;p&gt;Unfortunately, after creating your own Mastodon/Matrix instance or moving to a
well-maintained instance run by a trustworthy party, you still have to convince your
peers to participate in such a network, and that is the tricky part. The hard
truth is that most people do not care about abdicating their freedom and
participating in a social network as a customer. That is until the whole thing
becomes a PITA because of censorship rules, excess of advertisements,
moderation, and lack of active peers. That happened in the past for X/Twitter
and will happen again for other social networks. It is a matter of critical
mass: if the network is not large enough, you can miss a lot of your peers, and
you can do exactly nothing to solve this problem.&lt;/p&gt;&lt;p&gt;That is the same problem you find in IM systems. For historical reasons, most of
your peers have probably registered a WhatsApp account, far fewer a Telegram
one. There are even fewer with a Discord account (not much better) or something
less invasive and respectful of privacy, such as a Signal or Matrix account. The
only valid reason is that WhatsApp started in 2009, Telegram in 2013, and Signal
in 2014, and all of them, in one way or another, have tried to maintain user
lock-in or managed to solve scalability and add features since then.&lt;/p&gt;&lt;p&gt;One should ask why they all systematically avoided extending the existing XMPP
protocol early on instead of re-inventing the wheel. Call me a bad guy, but I think that
standardizing does not solve the problem of sustainability for such systems.
They need to lock in the users to survive.&lt;/p&gt;&lt;p&gt;Think if you had to use a non-standard SMTP protocol to send emails in the 90s.
That would have been an epic failure for the Internet tech community and users.
Curiously, this is not true on the current &lt;a href=&quot;https://lovergine.com/the-shattered-internet.html&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;shattered Internet&lt;/a&gt;, and this is the
reason for the current social nightmare.&lt;/p&gt;&lt;p&gt;It's not too late: let such non-distributed and proprietary networks die as soon
as possible; simply close your accounts or freeze them and move to something
more standard, distributed and human-sized. Don't worry about missing out on
someone; they will come sooner or later if they share the same ideas, so why
care? And if they do NOT share the same ideas, why care?&lt;/p&gt;&lt;p&gt;Come on, let's free ourselves of any social dependency on closed and proprietary
networks.&lt;/p&gt;&lt;p&gt;And if you absolutely must go around naked and drunk at the next
party, at least keep people from taking photos of you and publishing them.&lt;/p&gt;</content></entry><entry><title>Refurbish to fight against planned obsolescence</title><id>https://lovergine.com/refurbish-to-fight-against-planned-obsolescence.html</id><author><name>Francesco P. Lovergine</name><email>mbox@lovergine.com</email></author><updated>2024-09-25T18:30:00Z</updated><link href="https://lovergine.com/refurbish-to-fight-against-planned-obsolescence.html" rel="alternate" /><content type="html">&lt;p&gt;The planned obsolescence of computers and other IT electronic equipment is a
well-known plague of our age. Years ago, I stopped buying new computers, and I
now prefer refurbished ones whenever possible. That includes all my personal ICT
boxes, and even at work, I try to stretch the life cycle of the equipment
under my direct management. Proprietary OSes often limit the lifespan of IT
equipment, but in some cases, vendor-independent FOSS software can replace the
original one at End of Life (EOL). This is beneficial because FOSS software is
often more lightweight, customizable, and has a longer support life, thereby
extending the usability of your equipment.&lt;/p&gt;&lt;p&gt;For instance, I'm writing these notes on a dual-processor Lenovo Thinkstation
C30 refurbished workstation that left the factory 10 years ago with plenty of
RAM I bought 4 years ago. Thanks to the use of SSD storage instead of the old
HDDs, I hope to use it for at least another 4-5 years. My living room Thinkpad
L540 refurbished laptop is the same age, again with a replaced SSD, and I bought
it in the same period.  My main office personal HPZ320 workstation has been
around for almost the same number of years, while my office laptop is a damn new
Thinkpad X1 Carbon 5th-Gen from 2017. Both of them were bought new.&lt;/p&gt;&lt;p&gt;Of course, all of them run Debian GNU/Linux and have been upgraded regularly
during the years. This is essential. Otherwise, the well-known proprietary OSes
will obsolete your boxes, and you will be screwed. Generally, this is an easy
task because reasonably aged computers are better supported by FOSS kernels. It
is better to avoid vendors well known for problematic or unsupported
devices.&lt;/p&gt;&lt;p&gt;I have even run a couple of aged Seagate Personal NASes past their EOL for years, thanks
to installing a plain Debian distribution for ARMv7 processors instead of the
original Seagate one. I think they have been around for almost ten years, and
one is a used unit. Of course, the HDDs were changed in the meantime.&lt;/p&gt;&lt;p&gt;Some servers still in use are the rock-solid IBM xSeries. They were born with a
Suse Linux ES 11 and are still around and doing their jobs under Debian. Of
course, the trade-off between power consumption, budget, and goals must be
rigorously considered.&lt;/p&gt;&lt;h2 id=&quot;plan-for-the-future-in-advance&quot;&gt;Plan for the future in advance&lt;/h2&gt;&lt;p&gt;In order to run computers for a long time, I have found it essential to buy more
RAM than needed at purchase time (even in the case of new boxes). You
will need more, even if you don't know it yet. The same possibly holds for
the number of cores. If you buy a refurbished computer, it is inevitably a
professional/business unit you would never find in a shop for home users. That's
better because they generally have superior quality and decent support in the
Linux kernel. In the business market, note that many companies retain computers
only for a guaranteed support period (by internal policy), so you will always
find 3-5 years-old equipment recycled by medium/big companies. For personal use,
they are more than enough and exceptionally usable with a FOSS OS.&lt;/p&gt;&lt;p&gt;Note also that current computers (GPUs aside) are comparable to reasonably old
computers (let's say the last 10-12 years) in terms of the number of cores,
speed of RAM, and bus throughput. They are faster, but not by much. Their
average power consumption should be lower, but Moore's law is dead, sorry. So,
buying new computers is quite pointless in most cases, and even worse if you
purchase a consumer low-end computer for a limited budget. A refurbished box is
generally the best option because it is cleaned, incorporates ad hoc changes
(e.g., a new SSD for old units), is tested, and includes a one-year warranty by
its vendor (generally, but just check before buying). Moreover, a refurbished
computer was at least a SOHO unit, not a home product, so it is a much better
option.&lt;/p&gt;&lt;h2 id=&quot;be-redundant-be-safe&quot;&gt;Be redundant, be safe&lt;/h2&gt;&lt;p&gt;I never base my daily activities on a single box. At home and at work, I always
have fallbacks and replacements for emergencies. You cannot trust old (or even
new) units, and not for sure in the long term. The same goes for parts such as
HDDs or SSDs. &lt;em&gt;Ça va sans dire&lt;/em&gt; that you need to take care of your data and
backup with required redundancy, too.&lt;/p&gt;&lt;h2 id=&quot;missions-impossible&quot;&gt;Missions impossible&lt;/h2&gt;&lt;p&gt;As said, refurbishing options are minimal for GPUs and in some application
domains. I'm not a gamer, but there are rumors about forced obsolescence every
few years for commercial reasons triggered by games and vendors. Even for
high-performance computing, any GPU from before late 2014 is no longer truly
&lt;a href=&quot;https://en.wikipedia.org/wiki/CUDA#GPUs_supported&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;supported or usable&lt;/a&gt;. Again,
it is partially a matter of hardware, but proprietary software, including SDKs,
renders old GPUs obsolete.&lt;/p&gt;&lt;p&gt;Nowadays, the 2016 generation of GPUs with 8GB of VRAM basically has to be
retired for many applications: that was the standard size at the time, but now
vendors have decided that 8GB or less is too little for any practical use. Too
bad.&lt;/p&gt;&lt;p&gt;Smartphones are another area where planned obsolescence is the rule. The most
expensive ones have 7 years of software support, but most can only be upgraded
to a couple of major versions. Basically, you should change your phone every 3-5
years because security support is limited in time. But the average replacement
cycle seems to be less than 3 years anyway. Often, an older phone can still be used,
but without security support it is a risk. Too bad, again.&lt;/p&gt;&lt;p&gt;The same happens with much so-called smart electronic equipment: having a box
permanently connected to the network and missing any security support is
generally a bad idea, and only a small minority of them can be upgraded to FOSS
firmware. Cameras, TVs, routers, access points, and many other objects in our
homes have a sort of implicit end-of-life countdown that often expires before
the hardware itself does. You need to be very selective about equipment
vendors.&lt;/p&gt;&lt;p&gt;In many cases - guess what? - such devices now run an embedded version of
GNU/Linux that could be community-supported after the end of vendor support if
the vendor provided enough information appropriately, which rarely
happens. In most cases, the vendor only has a public archive of third-party
FOSS source packages, not the entire build system, so community support is not
practically viable. Even any development documentation is usually totally
missing, just to discourage people from following that path. In many cases, the
devices include totally proprietary chipsets and drivers, so there is no way.
That is also why Android is only an almost-open mobile operating system.
Too bad, take three.&lt;/p&gt;&lt;h2 id=&quot;will-it-change-in-the-future&quot;&gt;Will it change in the future?&lt;/h2&gt;&lt;p&gt;In general, this battle was lost years ago. Expanding or repairing a laptop
can now be challenging due to the deliberate practices of many vendors to
discourage self-repair. That's true for a lot of other equipment, too.
Vendors are always ready to obsolete your products faster than you are
prepared for. I'm a pessimist about the future of this war.&lt;/p&gt;&lt;p&gt;&lt;em&gt;They're coming outta the goddamn walls&lt;/em&gt;&lt;/p&gt;</content></entry></feed>