<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom"><title>frankie-tales</title><id>https://lovergine.com/feeds/tags/technology.xml</id><subtitle>Tag: technology</subtitle><updated>2026-02-25T15:33:03Z</updated><link href="https://lovergine.com/feeds/tags/technology.xml" rel="self" /><link href="https://lovergine.com" /><entry><title>Is AI driven coding the start of the end of mainstream FOSS?</title><id>https://lovergine.com/is-ai-driven-coding-the-start-of-the-end-of-mainstream-foss.html</id><author><name>Francesco P. Lovergine</name><email>mbox@lovergine.com</email></author><updated>2026-02-04T20:00:00Z</updated><link href="https://lovergine.com/is-ai-driven-coding-the-start-of-the-end-of-mainstream-foss.html" rel="alternate" /><content type="html">&lt;p&gt;Someone on Mastodon (I’m sorry, but I don’t remember who exactly) published a
short post that pointed to a rather technical economic study of the impact of AI
on FOSS software development [1].
It is no secret that the AI debate is highly polarized, and the enthusiasts for
the current trend in AI applications in the IT domain are at least as numerous
as those who are concerned/skeptical. What is certain is that no one can, in the
long term, prospectively evaluate the impact of AI on society, particularly in
the IT world.&lt;/p&gt;&lt;p&gt;The main thesis of the paper is that AI-based code production will end the
mainstreaming of FOSS software as we have known it over the last 15-20 years. The
paper begins with well-known episodes from recent history (specifically, the
Tailwind saga [2] and Stack Overflow's near-death experience [3]).&lt;/p&gt;&lt;p&gt;Of course, the paper presents a theoretical economic model to evaluate a
possible impact scenario for the FOSS production model, which could or could not
come to fruition, depending on the assumptions made.&lt;/p&gt;&lt;p&gt;My honest opinion is that a conscious and careful use of AI can accelerate
development, for better and for worse, depending directly on the experience and
skills of the people who use such models. That is why we are seeing both slop
and high-profile creations built with the aid of AI. Maybe slop contributions
are more prevalent simply because mediocre developers are the majority, and
mediocrity is the backbone of enterprise production (because it is the most
replicable and independent of contributors and their capacities).&lt;/p&gt;&lt;p&gt;Like it or not, modern software industries do not need, and actively fight against,
overly creative approaches. Enterprises need &lt;em&gt;aurea mediocritas&lt;/em&gt;, not isolated
geniuses. Also, depending on third-party creations apparently reduces an
enterprise's technical debt, because the debt is typically shifted onto someone else's
shoulders. Of course, this approach works until it fails miserably, when such
a third party disappears, changes its license model, changes its mind about the
product, changes its APIs, and so on.&lt;/p&gt;&lt;p&gt;That said, one clear consequence of using AI helpers in coding appears to be the
progressive disappearance of many packages, modules, and libraries, which can be
easily replaced by AI-generated creations tailored to the task. Just to cite one
practical example, Tailwind nowadays could be easily replaced by CSS and simple
JavaScript components, with the obvious advantage of not depending on yet
another third-party-controlled piece of code that could be subject to abrupt
changes from one version to another without notice and break existing codebases.
At the same time, Tailwind themes can be generated by AI without even consulting
its documentation (which apparently had an immediate impact on the company's revenues).&lt;/p&gt;&lt;p&gt;Another advantage is that AI-based, tailored solutions would reduce the amount
of code pulled in from external dependencies that solve other people's problems,
instead of focusing on the minimal set of features your own project needs (with all the
implications of possible breakages arising from such an anti-minimalistic
pattern).&lt;/p&gt;&lt;p&gt;Of course, using AI helpers in this way does not reduce the effort required to
understand and create new software, but it probably raises the required
competence to a higher level, which could be better in the long term, while
encouraging quick-and-dirty approaches in the near term. The so-called &lt;em&gt;vibe
coding&lt;/em&gt; is not a black-and-white concept; it has a lot of grey tones directly
depending on the awareness, responsibility, and skills of the developer: as
said, it can accelerate in many senses - even toward a crash against a wall -
increasing technical debt in an uncontrolled manner when in the hands of the
wrong individual. On that point, Anthropic itself recognizes that AI overuse
can negatively impact coding skills and debugging capabilities [6].&lt;/p&gt;&lt;p&gt;Add to this the current very high infrastructure load many networks are
reporting, for which the AI bots currently seem to be the culprit [4]. This
seems like very strange behavior for such botnets, given that web crawlers have
been around since the 90s and should be able to handle infrastructure load
fairly well by now. It seems that AI companies simply don't behave fairly on
their own, or that the training phases of neural nets are definitely more
demanding. Maybe both?&lt;/p&gt;&lt;p&gt;So, what do I see as the future for FOSS development as a whole? I am not as
pessimistic as the cited article. For sure, I see fewer small contributions in
the long term. Today, there is a massive production of AI-slop-based
contributions to many prime-time projects, but I see this as incidental. In
recent years, the GitHub-based &lt;em&gt;path of honors&lt;/em&gt; has been a major self-promotion
channel for junior developers, which explains the drift toward low-quality
contributions: devs are (were?) strongly motivated to contribute in order to build
personal portfolios, and AI slop offers an easy path to that. That’s also true for
fake security-related reports (see the well-known Curl project case [5] and others).
This is, of course, annoying, but in my view, that’s the result of current AI
hype and should normalize in the mid-term.&lt;/p&gt;&lt;p&gt;Also, in the near future, I see less and less relevance in FOSS projects that
are not sustained by a strong architectural idea, a good degree of innovation, a large
community, and a consistent development effort (much bigger than a few weeks or
months of work). That kind of project will become, let me say, mainly background
noise. Maybe that could impact whole categories of FOSS software: it is not a
secret that many language hubs are full of packages/modules of dubious quality,
often used because they are available just a use/import directive away. In many
cases, such products will simply be replaced by an AI-based reimplementation. Whether
the final result will be better or worse in average quality, only the future
will tell. For sure, AI-aided development (AIAD) will cause a progressive
&lt;em&gt;democratization/popularization&lt;/em&gt; of the development process, giving average
users access to possibilities once unavailable to them: we will probably see the
production of a plethora of small tools and workflows built on agents rather
than finished, refined products, like it or not.&lt;/p&gt;&lt;p&gt;The result could be an increase in FOSS products at the cost of lower average
generality and code quality, with a few high-end, tailored products for
mainstream applications instead. But was this really so different in the past? I
don’t think so. The true difference is probably the increase in quantity in both
sets of products, as amplified by AI tools: if one does not do her/his
homework, the result is clearly garbage, but that was true before AI, too.&lt;/p&gt;&lt;p&gt;&lt;em&gt;“AI gives us the worst and the best - simultaneously.”&lt;/em&gt;&lt;/p&gt;&lt;p&gt;&lt;em&gt;(Daniel Stenberg, the Curl Maintainer)&lt;/em&gt;&lt;/p&gt;&lt;h2 id=&quot;references&quot;&gt;References&lt;/h2&gt;&lt;ol&gt;&lt;li&gt;&lt;a href=&quot;https://arxiv.org/abs/2601.15494&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;Vibe Coding Kills Open Source&lt;/a&gt;.&lt;/li&gt;&lt;li&gt;&lt;a href=&quot;https://www.eweek.com/news/tailwind-labs-lays-off-engineers-due-to-ai/&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;Tailwind Labs Lays Off Engineers, Citing the ‘Brutal Impact’ of AI&lt;/a&gt;.&lt;/li&gt;&lt;li&gt;&lt;a href=&quot;https://www.devclass.com/ai-ml/2026/01/05/dramatic-drop-in-stack-overflow-questions-as-devs-look-elsewhere-for-help/4079575&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;Dramatic drop in Stack Overflow questions as devs look elsewhere for help&lt;/a&gt;.&lt;/li&gt;&lt;li&gt;&lt;a href=&quot;https://www.heise.de/en/news/OpenStreetMap-is-concerned-thousands-of-AI-bots-are-collecting-data-11157359.html&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;OpenStreetMap is concerned: thousands of AI bots are collecting data&lt;/a&gt;.&lt;/li&gt;&lt;li&gt;&lt;a href=&quot;https://daniel.haxx.se/blog/2025/07/14/death-by-a-thousand-slops/&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;Death by a thousand slops&lt;/a&gt;.&lt;/li&gt;&lt;li&gt;&lt;a href=&quot;https://www.anthropic.com/research/AI-assistance-coding-skills&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;How AI assistance impacts the formation of coding skills&lt;/a&gt;.&lt;/li&gt;&lt;/ol&gt;</content></entry><entry><title>How to trust FOSS players and the security implications</title><id>https://lovergine.com/how-to-trust-foss-players-and-the-security-implications.html</id><author><name>Francesco P. Lovergine</name><email>mbox@lovergine.com</email></author><updated>2026-01-27T17:30:00Z</updated><link href="https://lovergine.com/how-to-trust-foss-players-and-the-security-implications.html" rel="alternate" /><content type="html">&lt;p&gt;More and more, recent (and not so recent) episodes [1-5] show a hard truth
that we in the Debian project already discovered at the end of the 90s. A key
security principle in FOSS code development is ensuring the trustworthiness of
all parties involved, and that’s unfortunately also the weakest part of the
whole chain.&lt;/p&gt;&lt;p&gt;Debian adopted a long time ago a tentative GnuPG-based screening of people
involved in the project through the so-called &lt;a href=&quot;https://en.wikipedia.org/wiki/Web_of_trust&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;Web of Trust&lt;/a&gt;.
All developers need
to participate in eyeball meetings to sign each other's public keys after
verifying personal IDs. The more signatures, the more trustworthy. That is
generally mandatory before being granted the privilege of uploading to the main
archive without review. Of course, trust is not automatic, and each volunteer
must demonstrate they have the required skills and good intentions, and embrace
the Debian social contract. This is the annoying, often multi-year process to be
accepted as a contributor.&lt;/p&gt;
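&lt;p&gt;Just as an illustrative sketch (the key ID below is a placeholder), the signing step after an in-person ID verification boils down to something like this:&lt;/p&gt;&lt;pre&gt;&lt;code&gt;gpg --recv-keys 0xDEADBEEFCAFEBABE     # fetch the key from a keyserver
gpg --fingerprint 0xDEADBEEFCAFEBABE   # compare with the fingerprint checked in person
gpg --sign-key 0xDEADBEEFCAFEBABE      # certify the key with your own
gpg --armor --export 0xDEADBEEFCAFEBABE &amp;gt; signed-key.asc   # send this back to the owner
# Debian folks usually automate the whole dance with caff (signing-party package).&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;This process is far from perfect and may also be subject to abuse, as Martin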
Krafft demonstrated at a past DebConf, a long time ago. One of the main issues
is that maintainers work on upstream software that generally does not undergo
such a process to accept contributors. As the well-known sad case of the xz
utils demonstrated [6] a couple of years ago, an initial review of pull requests is
not generally enough to ensure that the developer or group does not have evil
intentions in the mid- to long-term. Also, with the best intentions of sane
upstream developers, evil parties can make very creative efforts to hide their
malware code. That episode demonstrated this masterfully, and it did not
cause major security issues only by sheer luck.&lt;/p&gt;&lt;p&gt;The sad reality is that none of the main programming language hubs is really
trustworthy, because literally anyone can register anonymously to upload code
and participate in development teams. Core teams at least review pull requests
before accepting them to avoid major abuses. To address this, enforcing a
worldwide web of trust for all core projects and possibly all software hubs
should be considered a mandatory step to improve security and accountability.&lt;/p&gt;&lt;p&gt;It does not resolve all problems, but it helps. A central authority is not the
answer, as it could create more problems than it solves. Instead, trustworthiness
should be enhanced by encouraging ongoing cross-review among multiple parties and
by establishing processes that build developers' trust over time through active
participation and transparent peer review. While key hijacking remains a risk,
this is a separate issue requiring distinct protective measures.&lt;/p&gt;&lt;p&gt;I've written about &lt;a href=&quot;/are-distributions-still-relevant.html&quot;&gt;this related issue before&lt;/a&gt;.
The shift from distributions to
language and upstream hubs moves software management onto developers and users,
increasing the risk of security incidents from malicious contributions.&lt;/p&gt;&lt;p&gt;That said, like it or not, most FOSS products out there are created/maintained
by single individuals and micro-development teams with no warranties and
questionable durability. I wonder how billion-dollar companies can seriously
consider basing their core business on such foundations, a problem directly
connected to the broader sustainability challenges for FOSS projects. The
progressive spread of the AGPL license and other similar licenses is a symptom
of this type of malaise and should be taken into consideration, as they are
different aspects of the same problem. Security concerns are another key point
that we, as a community, should try to manage better, but my honest thought is
that nowadays, predatory big (and not so big, even) companies (as well as public
bodies too) that use community-driven FOSS code in an unfair manner, without
returning a cent for development and maintenance, are no longer acceptable.&lt;/p&gt;&lt;p&gt;Therefore, FOSS communities are not perfect, but much of the blame nowadays
rests on the shoulders of companies and public bodies that are still looking
out of their windows instead of being active stewards and promoting reciprocal
collaboration among all involved parties.&lt;/p&gt;&lt;p&gt;Come on, put the money and effort into the sources of your digital profits.&lt;/p&gt;&lt;h2 id=&quot;references&quot;&gt;References&lt;/h2&gt;&lt;ol&gt;&lt;li&gt;&lt;a href=&quot;https://thehackernews.com/2026/01/malicious-pypi-package-impersonates.html&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;Malicious PyPI Package Impersonates SymPy, Deploys XMRig Miner on Linux Hosts&lt;/a&gt;&lt;/li&gt;&lt;li&gt;&lt;a href=&quot;https://www.microsoft.com/en-us/security/blog/2025/04/15/threat-actors-misuse-node-js-to-deliver-malware-and-other-malicious-payloads/&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;Threat actors misuse Node.js to deliver malware and other malicious payloads&lt;/a&gt;&lt;/li&gt;&lt;li&gt;&lt;a href=&quot;https://thehackernews.com/2025/04/nodejs-malware-campaign-targets-crypto.html&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;Node.js Malware Campaign Targets Crypto Users with Fake Binance and TradingView Installers&lt;/a&gt;&lt;/li&gt;&lt;li&gt;&lt;a href=&quot;https://www.sonatype.com/blog/rubygems-laced-with-bitcoin-stealing-malware&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;Two New RubyGems Laced with Cryptocurrency-Stealing Malware Taken Down&lt;/a&gt;&lt;/li&gt;&lt;li&gt;&lt;a href=&quot;https://www.mycert.org.my/portal/advisory?id=MA-714.022019&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;PHP Pear Vulnerability&lt;/a&gt;&lt;/li&gt;&lt;li&gt;&lt;a href=&quot;https://en.wikipedia.org/wiki/XZ_Utils_backdoor&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;https://en.wikipedia.org/wiki/XZ_Utils_backdoor&lt;/a&gt;&lt;/li&gt;&lt;/ol&gt;</content></entry><entry><title>The perfect desktop is a matter of points of view, or not?</title><id>https://lovergine.com/the-perfect-desktop-is-a-matter-of-points-of-view-or-not.html</id><author><name>Francesco P. Lovergine</name><email>mbox@lovergine.com</email></author><updated>2026-01-22T19:40:00Z</updated><link href="https://lovergine.com/the-perfect-desktop-is-a-matter-of-points-of-view-or-not.html" rel="alternate" /><content type="html">&lt;p&gt;I recently learned about an opinionated flavor of the Arch distribution called
&lt;a href=&quot;https://omarchy.org/&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;Omarchy&lt;/a&gt;, which is basically a collection of desktop
packages built on top of a rolling Arch distribution. Nothing special, except for
the vocal original author of the scripting work at the base of this flavor, who
is, as happens with many old-school self-centered geeks out there, the much
discussed DHH. I will not enter into the merits of the reasons for the dubious
fame of David &amp;quot;DHH&amp;quot; Heinemeier Hansson, which basically stem from some of his
past posts on X/Twitter and some of his questionable ideas.&lt;/p&gt;&lt;p&gt;&lt;img src=&quot;/images/wm-vs-de.png&quot; alt=&quot;The great fight between WMs and DEs&quot; /&gt;&lt;/p&gt;&lt;p&gt;I’m not interested in that here. I’m more interested in some spontaneous
thoughts about the hype (well, at least among the very restricted niche group of
Linux desktop fans) around this desktop flavor. It is not something new; the
Hyprland UX is basically an &lt;a href=&quot;https://i3wm.org/&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;i3&lt;/a&gt;-like
&lt;a href=&quot;https://en.wikipedia.org/wiki/Tiling_window_manager&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;tiling window manager&lt;/a&gt; on steroids,
based on Wayland rather than Xorg, with a few bells and
whistles.&lt;/p&gt;&lt;p&gt;I have been a Linux desktop user since the 90s, and a tiling window
manager (specifically one of the suckless incarnations, &lt;a href=&quot;https://awesomewm.org/&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;Awesome WM&lt;/a&gt;) has been my
main desktop for quite a few years. Some years ago, I abandoned such a paradigm
when I finally realized that a pure tiling window manager is a great idea until
it isn't. Basically, most of its &lt;em&gt;pros&lt;/em&gt; (one application per virtual desktop, easy
tiling on big displays, keyboard-driven navigation) can be easily replicated in
a capable desktop environment like any current Gnome version. This has the big
advantage of being ready for use right after installation and of being easily
and fully customizable via plugins. The &lt;em&gt;cons&lt;/em&gt; of a tiling WM are always present,
depending on one's workflows, and there are generally no easy workarounds. The biggest is
the need to find tricks and third-party tools to solve use cases that are not
always trivial (or worse, that are trivial on a DE instead).&lt;/p&gt;&lt;p&gt;A DE has the indisputable advantage of including all batteries for widgets and
customization tools, whereas most (if not all) WMs require third-party tooling to
manage many disparate configuration snippets, such as Bluetooth, Wi-Fi, hot-plug devices,
auto-sensing of beamers, dynamic multi-display, fast binding of container apps,
accessibility features, and many others. Too often, also, such WMs require using a
command-line tool or a workaround to perform tasks that are simply part of the common DE experience.&lt;/p&gt;&lt;p&gt;I also remember the pain of using the multiwindow GUI of GRASS GIS under Awesome,
an application that at the time was simply designed for a floating window
manager. When an application opens a new window for every new module
used, well, the UX could become a nightmare under a tiling WM, if you are not
using a 43-inch display. The same goes for virtualized desktops, too: when the
guest and host compete for keyboard control, the continuous switching can
rapidly lead to madness. Those are just a couple of examples to conclude that the
coolness of a desktop implementation is often a matter of perspective and
personal workflows, and I constantly found that a mandatory tiling WM paradigm
is simply less flexible in some practical cases.&lt;/p&gt;&lt;p&gt;To be honest, I find the Omarchy UX to be the typical incarnation of a canonical
WM-based interface for fresh Linux desktop users. Such users are divided into
two classes:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;The class of people who are searching for an exact replica of Windows/macOS
GUI. A no-hope group of people: if something has to look like Windows, have
exactly the same policies and the same applications and icons, even, well,
probably they should stay with Windows: simple and clean. They are the most
critical and vocal complainers, for whom the Linux-on-the-desktop era will
never come.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;The class of people who look for something radically different and discover
the keyboard-driven interfaces as something almost magical (without fully
realizing that such an experience can, for the most part, be easily replicated by using
environment shortcuts and some simple plugins). A trivial secret, I would say.
They are the most enthusiastic about this kind of desktop, but also regular
distro-hoppers (yes, I mean that as an offense: distro-hopping is for gamers, not
for workers who need to complete primary tasks daily); more often,
they will never admit they are simply playing around, and solving self-inflicted problems
is part of their game.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;Of course, the WM-based desktop paradigm still has its own use cases,
which I group under a few limited cases:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;You are using a strictly rolling distribution, such as Debian sid/unstable,
Arch, Guix, Gentoo, and many other in-development flavors of other mainstream
distributions, including Fedora and OpenSUSE. On such distributions, avoiding
desktop environments reduces the likelihood of encountering temporary problems
after daily upgrades, as some transient (in the order of days/weeks) breakages
can occur. But who is the user of such platforms today? Seriously, I think only
someone actively involved in development and testing should be interested in
such distributions. Today, most desktop apps are distributed as containerized
packages via one of the multiple available hubs for Flatpak, AppImage, Snap,
Docker/Podman. I can’t see the practical advantage of using an unstable
distribution on a daily basis. If you think differently, dude, you have a
problem, and it is not the distribution you are using, but it is what you see in
the mirror. If you are not a YouTuber who needs to produce videos to monetize,
well, you are probably simply using the wrong distribution pointlessly (and
creating your own problems from time to time, which are perfectly part of the
rolling experience, as per the manual).&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;You are using an old, low-resource box with limited RAM and cores. A platform
that cannot simply be used with the current desktop environments. I seriously
doubt it could also be used for general computing, indeed. Nowadays, even a web
browser is simply a hungry hog on such platforms. I mean, a dual-core box with 4 GB of
RAM that could be more than 15 years old. If this is your platform, well, a
window manager is perfectly legitimate, but such a box probably couldn't run
Hyprland either. And to do what? I mean, apart from installing it and telling your
friends about it...&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;You are an old-style lazy geek, anchored to your own configurations, refined
over dozens of years, with very few reasons to change. That’s perfectly
legitimate, but most of those configurations could probably be out of date. I
know, you are still adjusting your Modelines in your Xorg configuration. Well,
dude, probably it’s time to come down from that tree in the forest.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;Of course, I also tried to install Omarchy on an old box of mine (an 11-year-old
Lenovo ThinkPad L540 with a dual-core i5 and 8GB RAM) that runs perfectly with
the current Debian 13 and Gnome 48. Sadly, the installer did not even boot:
just a dark screen. Good, but not too good, dudes.&lt;/p&gt;&lt;p&gt;And this leads me to the elephant-in-the-room argument for this post. Most users
need stability and, occasionally, up-to-date applications. Average users
need certainty that they can easily install an OS on most platforms and have a
stable UX for a decently long period (let’s say 2-5 years without any
reinstallation in between). The more users, the more stability. The simpler, the
more effective, too.  And that’s the real point most devs (or wannabe experts)
have probably missed in the meantime.
The desktop is a mere tool; it should not require an expertise addiction.&lt;/p&gt;&lt;p&gt;It is not a matter of DE vs WM, but of homogeneity and generality versus something
good, but not good enough for everyone. If one has to reinvent the wheel to manage a
configurable tool that, in a DE, is a point-and-click away, it is a failure
in general UX. Of course, even DEs are far from perfect, but too often, WM UX is
far from even being basically complete.&lt;/p&gt;&lt;p&gt;For instance, I can easily manage my full clipboard history with inter-session
persistence thanks to a simple Gnome plugin (namely Clipboard Indicator). There is no
equivalent widget in most WMs, so one needs to use a third-party tool to
provide something that is almost equivalent, but often incomplete. Well,
Houston, we have a problem! That’s just an example, but the general approach is
clear: if one has to constantly sacrifice immediate, good-enough implementations
to adopt half-finished tools or workarounds to solve basic GUI workflows, WMs
become not accelerators of productivity but defective implementations, and that
has been my constant experience in that regard with WMs. At some point, one has
to set priorities, and after years, my priority has become not to waste time
reinventing the wheel for desktop GUIs. Sorry, guys. There is more than one way
to implement a desktop interface, but many of them can simply become a pain
because they are not flexible or complete enough, resulting in continuous
adjustments and workarounds to get something working decently.&lt;/p&gt;&lt;p&gt;And yes, this is another damn opinionated post about
the current &lt;em&gt;Year of Linux on Desktop&lt;/em&gt;. Don't take it too seriously...&lt;/p&gt;</content></entry><entry><title>A Terramaster NAS with Debian, take two.</title><id>https://lovergine.com/a-terramaster-nas-with-debian-take-two.html</id><author><name>Francesco P. Lovergine</name><email>mbox@lovergine.com</email></author><updated>2026-01-19T13:00:00Z</updated><link href="https://lovergine.com/a-terramaster-nas-with-debian-take-two.html" rel="alternate" /><content type="html">&lt;p&gt;After experimenting at home, the very first professional-grade NAS from
Terramaster arrived at work, too, with 12 HDD bays and, possibly, a pair of M.2
NVMe cards. In this case, I again installed a plain Debian distribution, but HDD
monitoring required some configuration adjustments to run &lt;code&gt;smartd&lt;/code&gt; properly.&lt;/p&gt;&lt;p&gt;A decent approach to data safety is to run regularly scheduled short and long
SMART tests on all disks to detect potential damage. Running such tests on all
disks at once isn't ideal, so I set up a script to create a staggered
configuration and test multiple groups of disks at different times. Note that it
is mandatory to re-detect the devices at each reboot, because their names and order
can change.&lt;/p&gt;&lt;p&gt;Of course, the same principle (short/long tests at regular intervals throughout the
week) also applies to simpler configurations, as in the case of my home
NAS with a pair of RAID1 devices.&lt;/p&gt;&lt;p&gt;What follows is a simple script to create a staggered &lt;code&gt;smartd.conf&lt;/code&gt; at boot
time:&lt;/p&gt;&lt;pre&gt;&lt;code&gt;#!/bin/bash
#
# Save this as /usr/local/bin/create-smartd-conf.sh
#
# Dynamically generate smartd.conf with staggered SMART test scheduling
# at boot time based on discovered ATA devices

# HERE IS A LIST OF DIRECTIVES FOR THIS CONFIGURATION FILE.
# PLEASE SEE THE smartd.conf MAN PAGE FOR DETAILS
#
#   -d TYPE Set the device type: ata, scsi[+TYPE], nvme[,NSID],
#           sat[,auto][,N][+TYPE], usbcypress[,X], usbjmicron[,p][,x][,N],
#           usbprolific, usbsunplus, sntasmedia, sntjmicron[,NSID], sntrealtek,
#           ... (platform specific)
#   -T TYPE Set the tolerance to one of: normal, permissive
#   -o VAL  Enable/disable automatic offline tests (on/off)
#   -S VAL  Enable/disable attribute autosave (on/off)
#   -n MODE No check if: never, sleep[,N][,q], standby[,N][,q], idle[,N][,q]
#   -H      Monitor SMART Health Status, report if failed
#   -s REG  Do Self-Test at time(s) given by regular expression REG
#   -l TYPE Monitor SMART log or self-test status:
#           error, selftest, xerror, offlinests[,ns], selfteststs[,ns]
#   -l scterc,R,W  Set SCT Error Recovery Control
#   -e      Change device setting: aam,[N|off], apm,[N|off], dsn,[on|off],
#           lookahead,[on|off], security-freeze, standby,[N|off], wcache,[on|off]
#   -f      Monitor 'Usage' Attributes, report failures
#   -m ADD  Send email warning to address ADD
#   -M TYPE Modify email warning behavior (see man page)
#   -p      Report changes in 'Prefailure' Attributes
#   -u      Report changes in 'Usage' Attributes
#   -t      Equivalent to -p and -u Directives
#   -r ID   Also report Raw values of Attribute ID with -p, -u or -t
#   -R ID   Track changes in Attribute ID Raw value with -p, -u or -t
#   -i ID   Ignore Attribute ID for -f Directive
#   -I ID   Ignore Attribute ID for -p, -u or -t Directive
#   -C ID[+] Monitor [increases of] Current Pending Sectors in Attribute ID
#   -U ID[+] Monitor [increases of] Offline Uncorrectable Sectors in Attribute ID
#   -W D,I,C Monitor Temperature D)ifference, I)nformal limit, C)ritical limit
#   -v N,ST Modifies labeling of Attribute N (see man page)
#   -P TYPE Drive-specific presets: use, ignore, show, showall
#   -a      Default: -H -f -t -l error -l selftest -l selfteststs -C 197 -U 198
#   -F TYPE Use firmware bug workaround:
#           none, nologdir, samsung, samsung2, samsung3, xerrorlba
#   -c i=N  Set interval between disk checks to N seconds
#    #      Comment: text after a hash sign is ignored
#    \      Line continuation character
# Attribute ID is a decimal integer 1 &amp;lt;= ID &amp;lt;= 255
# except for -C and -U, where ID = 0 turns them off.

set -euo pipefail

# Test schedule configuration
BASE_SCHEDULE=&amp;quot;L/../../6&amp;quot;  # Long test on Saturdays
TEST_HOURS=(01 03 05 07)   # 4 time slots: 1am, 3am, 5am, 7am

DEVICES_PER_GROUP=3

main() {
    # Get array of whole-disk device names (e.g., sda, sdb), skipping partition nodes
    mapfile -t devices &amp;lt; &amp;lt;(ls -l /dev/disk/by-id/ | grep ata | awk '{print $NF}' | grep sd | cut -d/ -f3 | grep -E '^sd[a-z]+$' | sort -u)

    if [[ ${#devices[@]} -eq 0 ]]; then
        exit 1
    fi

    # Start building config file
    cat &amp;lt;&amp;lt; EOF
# smartd.conf - Auto-generated at boot
# Generated: $(date '+%Y-%m-%d %H:%M:%S')
#
# Staggered SMART test scheduling to avoid concurrent disk load
# Long tests run on Saturdays at different times per group
#
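# Each generated entry follows this pattern (HH = group hour, hh = HH + 12):
#   /dev/sdX -a -o on -S on -s (L/../../6/HH|S/../.././hh) -m root
#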
EOF

    # Process devices into groups
    local group=0
    local count_in_group=0

    for i in &amp;quot;${!devices[@]}&amp;quot;; do
        local dev=&amp;quot;${devices[$i]}&amp;quot;
        local hour=&amp;quot;${TEST_HOURS[$group]}&amp;quot;

        # Add group header at start of each group
        if [[ $count_in_group -eq 0 ]]; then
            echo &amp;quot;&amp;quot;
            echo &amp;quot;# Group $((group + 1)) - Tests at ${hour}:00 on Saturdays&amp;quot;
        fi

        # Add device entry
        # Long test at the group's hour on Saturday, short test daily 12 hours
        # later; both schedules are combined into a single -s regular expression.
        echo &amp;quot;/dev/${dev} -a -o on -S on -s (${BASE_SCHEDULE}/${hour}|S/../.././$(( (10#$hour + 12) % 24 ))) -m root&amp;quot;

        # Move to next group when current group is full
        count_in_group=$((count_in_group + 1))
        if [[ $count_in_group -ge $DEVICES_PER_GROUP ]]; then
            count_in_group=0
            group=$(((group + 1) % ${#TEST_HOURS[@]}))
        fi
    done
}

main &amp;quot;$@&amp;quot;&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;To run such a script at boot, add a unit file to the systemd configuration.&lt;/p&gt;&lt;pre&gt;&lt;code&gt;sudo systemctl edit --force --full regenerate-smartd-conf.service
sudo systemctl enable regenerate-smartd-conf.service&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Where the unit service is the following:&lt;/p&gt;&lt;pre&gt;&lt;code&gt;[Unit]
Description=Generate smartd.conf with staggered SMART test scheduling
# Wait for all local filesystems and udev device detection
After=local-fs.target systemd-udev-settle.service
Before=smartd.service
Wants=systemd-udev-settle.service
DefaultDependencies=no

[Service]
Type=oneshot
# Only generate the config file, don't touch smartd here
ExecStart=/bin/bash -c '/usr/local/bin/create-smartd-conf.sh &amp;gt; /etc/smartd.conf'
StandardOutput=journal
StandardError=journal
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target&lt;/code&gt;&lt;/pre&gt;</content></entry><entry><title>AI training, copyright and the future of contents creation</title><id>https://lovergine.com/ai-training-copyright-and-the-future-of-contents-creation.html</id><author><name>Francesco P. Lovergine</name><email>mbox@lovergine.com</email></author><updated>2026-01-11T21:00:00Z</updated><link href="https://lovergine.com/ai-training-copyright-and-the-future-of-contents-creation.html" rel="alternate" /><content type="html">&lt;p&gt;I have already addressed the implications of modern LLMs, specifically their
training, in the context of copyright and licenses for both code and original
content. An 'IANAL' disclaimer applies to this post, but my honest opinion is
that such training is a legitimate form of reading and learning through study,
unless such use is explicitly excluded from the licensee's rights.&lt;/p&gt;&lt;p&gt;&lt;img src=&quot;/images/ai-electric-sheeps.jpg&quot; alt=&quot;AI dreams of electric sheeps&quot; /&gt;&lt;/p&gt;&lt;p&gt;Following the exploitation of LLMs and the AI boom that began in 2022, several
lawsuits and litigations emerged among multiple parties, with a few reaching a
significant milestone through the first court rulings. Note that every country
has slightly different regulations about copyright and fair use, so the current
lawsuits could be only the starting point of a long list of legal actions.&lt;/p&gt;&lt;p&gt;While most of the current lawsuits seem to indicate that Anthropic or Meta
had the right to use books bought (in paper or digital form) for LLM training
(on the basis of the fair use principle), the most problematic aspect instead is
the apparent use of pirated books taken from LibGen and other known piracy
websites, which - if confirmed - could result in potentially destructive damages
for the companies, forced to compensate authors and pay fees in the order of hundreds
of billions.&lt;/p&gt;&lt;p&gt;The same problems are present on the coding side: again, using FOSS-licensed
code for training could fall under fair use, but training using private
codebases, as well as proprietary ones, could be equally destructive for the
same companies, as well as for GitHub and Microsoft.
The key point would be demonstrating, without any doubt, the unfair use of
private or pirated content, of course.&lt;/p&gt;&lt;p&gt;Of course, I'm quite sure future licenses for FOSS codebases and documentation
could include an explicit exclusion clause for AI training, which could
jeopardize the legitimacy of such use even for future FOSS code. I would expect
such a license change, as some projects already explicitly exclude AI-based
contributions. My opinion about such a question is that it could represent
shooting oneself in the foot, due to the pervasiveness of AI tools among
developers currently. Adoption of AIAD could represent a boost in development
time if adopted with a healthy dose of skepticism (i.e., a human-in-the-loop
approach). About that, I'm quite convinced by Linus Torvalds's point of view: the
point is not who writes the code, but who is technically responsible for it and
ensures the required quality review and supervision.&lt;/p&gt;&lt;p&gt;Moreover, an implication of the current polarization in the AI hype is the
future (present?) crisis of traditional web content providers. A symptomatic
case is the StackOverflow crisis, which will, with high probability, lead to
the end of the service as we know it in the near future.&lt;/p&gt;&lt;p&gt;&lt;img src=&quot;/images/stackoverflow-graph.webp&quot; alt=&quot;The crisis of StackOverflow&quot; /&gt;&lt;/p&gt;&lt;p&gt;That will have an
impact on future AI training, too, for sure, because SO has been for years a
huge source of knowledge about multiple fields in IT. What if fewer and fewer
people contribute to Wikipedia and general web content? What if more and
more sources of information were to restrict the use of their information
to pure human-driven study? Knowledge has not been static in human history; AI
models will need to continuously enrich their training sets and stay up to date.&lt;/p&gt;&lt;p&gt;It would be grotesque if the whole AI hype were brought to a halt by such
copyright-based legal questions (even if I'm pretty sure a fully fair training
would be possible now for such companies, who knows the impact of a more limited
approach on the final result?). Surely, this seems the most serious threat to the
future of such companies and of AI-based solutions as a whole.&lt;/p&gt;&lt;p&gt;The only true solution to such a threat is finally having a truly open training
model, which details sources and the whole process of training with full
transparency, something that even the so-called open AI models are still not
ready to provide.&lt;/p&gt;&lt;h2 id=&quot;references&quot;&gt;References&lt;/h2&gt;&lt;ol&gt;&lt;li&gt;&lt;a href=&quot;https://www.npr.org/2025/09/05/nx-s1-5529404/anthropic-settlement-authors-copyright-ai&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;Anthropic settles with authors in first-of-its-kind AI copyright infringement lawsuit&lt;/a&gt;&lt;/li&gt;&lt;li&gt;&lt;a href=&quot;https://www.anthropiccopyrightsettlement.com/&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;Anthropic Copyright Settlement Website&lt;/a&gt;&lt;/li&gt;&lt;li&gt;&lt;a href=&quot;https://www.joneswalker.com/en/insights/blogs/ai-law-blog/why-anthropics-copyright-settlement-changes-the-rules-for-ai-training.html?id=102l0z0&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;Why Anthropic’s Copyright Settlement Changes the Rules for AI Training&lt;/a&gt;&lt;/li&gt;&lt;li&gt;&lt;a href=&quot;https://www.technologyreview.com/2025/07/01/1119486/ai-copyright-meta-anthropic/&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;What comes next for AI copyright lawsuits?&lt;/a&gt;&lt;/li&gt;&lt;/ol&gt;</content></entry><entry><title>This was for every one: about the crisis of the web</title><id>https://lovergine.com/this-was-for-every-one-about-the-crisis-of-the-web.html</id><author><name>Francesco P. Lovergine</name><email>mbox@lovergine.com</email></author><updated>2025-12-25T15:30:00Z</updated><link href="https://lovergine.com/this-was-for-every-one-about-the-crisis-of-the-web.html" rel="alternate" /><content type="html">&lt;p&gt;I just finished reading the delightful book by Sir Tim Berners-Lee, titled &lt;em&gt;This
is for Everyone&lt;/em&gt;, published this year. It is a long trip, almost 400 pages,
about the origin and evolution of the World Wide Web, seen by those who
conceived and pushed it from the start. The entire first part of the book is
dedicated to the history of the web, the W3C, and the Web Foundation's
operations as we have known them in the first 30 years of its development, from
1989 onwards.&lt;/p&gt;&lt;p&gt;&lt;img src=&quot;/images/timbl_tife.jpg&quot; alt=&quot;This is for everyone&quot; /&gt;&lt;/p&gt;&lt;p&gt;I was there at the very beginning of the 90s: I have been connected to the Internet
since 1991, and for a good part reading this book has been an emotional trip
through my memories of those events and people. He is a visionary and an idealist who
fought for an extended period to prevent his WWW creature from being intercepted
and disrupted by for-profit interests.&lt;/p&gt;&lt;p&gt;It happened almost from the start, when first NCSA, then Netscape, and Microsoft
tried one after the other to change the whole idea of openness into something
proprietary, following the same scheme of embrace, extend, and
extinguish. In practice, the complete negation of standards and openness, with
a clear goal in mind: obtaining users' lock-in into proprietary products,
clearly for profit.&lt;/p&gt;&lt;p&gt;Tim provides evidence on multiple critical aspects of the current incarnation of
the net as we know it today and over the last 20 years or more. They are both
technical and social defects or drifts. The web is no longer what we came to
know in its first years of existence. The start of the end of the original
web concept was the mobile-first approach, which relegated the use of a regular
computer to a second-class experience for most users. Most of the digital-native
people never used a computer to access the network, and that user experience
deeply affects the current vision of the web.&lt;/p&gt;&lt;p&gt;For years now, a browser has not been the main program for accessing
content and services. Social networks are mostly not interoperable because
companies have little interest in having their users leave the walled gardens of
their apps. Using a browser and potentially exiting the company's services to
access other servers and spaces is tolerated, but is perceived as damaging
profits. That's simply because users are not users, but customers. The result is
&lt;a href=&quot;https://lovergine.com/the-shattered-internet.html&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;the shattered Internet about which I already wrote&lt;/a&gt;: the W3C standards are still
relevant, but embedded in applications and frameworks that enrich and upset the user
experience with proprietary workflows and extensions.&lt;/p&gt;&lt;p&gt;An emblematic case is Apple, which has, in practice, abandoned its WebKit engine
and Safari browser in favor of apps and proprietary services to monetize
customers and companies.&lt;/p&gt;&lt;p&gt;The concrete risk is that the whole web and its standards would become a
marginalized component of the net, while most users are confined to walled-off
realms of proprietary services and social networks. The recent AI innovation can
mark the final chapter of web content creation and search as we have
used them over the last 30 years. More and more users will limit themselves to
AI-provided overviews instead of collecting and consulting multiple sources of
information and independent services. That will also have a concrete impact on
revenues and interest in content creation and provision at large.&lt;/p&gt;&lt;p&gt;The second part of the book is fully dedicated to all such problems: the impact
of social networks, the last few years of generative AI, the BigCo dominance,
and includes all Tim's worries for the foreseeable future.  He's an idealistic,
optimistic, and positive guy due to his past experiences.  However, he also has
a good dose of sane realism. He understands that the path is nebulous and full
of dangers (specifically, the AI path is highly polarizing and can hide multiple
issues at many levels).&lt;/p&gt;&lt;p&gt;He sees in the indie web, and specifically in open and well-structured
distributed standards (such as the ActivityPub protocol), a possible way to
change the present and future by favoring interoperability and independence. A
concrete proposal is the Solid standard for personal data wallets (or pods in
Solid terminology) under complete user control for accessibility by third-party
services. Such a standard is still in its infancy, but the true problem I see is
the trustworthiness of involved parties, both companies and governments.
Trust is the key, and maybe we all individually lost that superpower a long time ago.&lt;/p&gt;&lt;p&gt;Creating a corpus of rules to manage all such technologies and ensure ethical
behavior can be a desperate illusion; the only concrete alternative would be to
opt out, at the cost of exclusion from the social context (not only the digital
one). But I agree there is no other way to recover the original idea of the web.
The AI technologies are even more polarizing, among doomers and boomers, with a
bumpy road ahead. For sure, open protocols and distributed multi-peer services
are the inevitable starting point, but they won't be enough.&lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;&amp;quot;It was not enough simply to release new technology and hope for the world to
improve. You had to develop technology and society together. You really had to
fight, in a principled and continuous way, for human rights. The web offered
people a platform for their voices to be heard, reducing the cost of publishing
and distributing information to effectively nothing. But, used improperly, it
could also be turned into a tool of surveillance and control.&amp;quot;  (timbl)&lt;/p&gt;&lt;/blockquote&gt;&lt;h2 id=&quot;references&quot;&gt;References&lt;/h2&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;&lt;a href=&quot;https://us.macmillan.com/books/9780374612467/thisisforeveryone/&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;Tim Berners-Lee, &lt;em&gt;This is for everyone: The unfinished story of the World Wide Web&lt;/em&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;a href=&quot;https://www.edelman.com/sites/g/files/aatuss191/files/2025-01/2025%20Edelman%20Trust%20Barometer_U.S.%20Report.pdf&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;2025 Edelman Trust Barometer: Trust and the Crisis of Grievance&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;a href=&quot;https://solidproject.org/&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;The Solid project&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;</content></entry><entry><title>Installing Debian on a USB stick for a Terramaster NAS</title><id>https://lovergine.com/installing-debian-on-a-usb-stick-for-a-terramaster-nas.html</id><author><name>Francesco P. Lovergine</name><email>mbox@lovergine.com</email></author><updated>2025-10-15T18:00:00Z</updated><link href="https://lovergine.com/installing-debian-on-a-usb-stick-for-a-terramaster-nas.html" rel="alternate" /><content type="html">&lt;p&gt;I recently bought a basic NAS for home use. The NAS is a nice &lt;a href=&quot;https://shop.terra-master.com/it-it/products/f2-425&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;Terramaster
F2-425&lt;/a&gt;, which is a very
basic RAID1-only NAS with a decent CPU and 2.5Gb network. Terramaster allows
users to either use its custom Linux-based TOS or install any other operating
system supported by the x86_64-based platform. Note that this model does not
mount any NVMe unit for the OS, unlike the F2-424.&lt;/p&gt;&lt;p&gt;Common choices include TrueNAS, Proxmox, or any other Linux-based distribution.
My choice has been a plain Debian stable distribution because I do not have
special requirements and prefer a lightweight CLI-only solution over a
dashboard. The F2-425 does not have NVMe cards, only regular HDDs/SSDs.
However, when installing an independent OS, as in my case, you can immediately
use an external USB stick for the system, and dedicate HDDs to data. The unit
even has a tiny (264MB) internal USB stick for installing TOS, but I simply
used a decent 16GB SanDisk thumb drive. The clear advantage is the possibility
of having the base system and data perfectly separated, and multiple copies of
the stick for safety.&lt;/p&gt;&lt;p&gt;Of course, the installation of such a system can be done without using the
Debian installer at all, so I'm describing here how to perform such an
installation for my future reference and for other geeks. Of course, you need a
running Linux system with &lt;em&gt;debootstrap&lt;/em&gt; installed. The process involves
partitioning the stick in GPT mode, installing the base system and EFI, and
configuring the system to finalize a bootable system with the necessary
software to connect the NAS to the network, including OpenSSH.&lt;/p&gt;&lt;p&gt;Note that the 2.5 Gb Ethernet is a Realtek chip, so a firmware blob
(the firmware-realtek package on Trixie) is required for it to work properly.
Alternatively, another of the USB ports could also be used to add a wireless
connection. The OS stick could be simply mounted on the internal port, but it
requires opening the chassis for that and using a tiny stick.&lt;/p&gt;&lt;p&gt;At power-on, the internal TOS dongle automagically boots up, so connecting an
HDMI display and a keyboard is required to change the setup to boot the Debian
EFI image on the stick. On F2-425, press the &amp;lt;F12&amp;gt; key to access the AMI setup
and change boot priorities. There are always slight differences among AMI BIOS
setups, so it is required to find the right key to access settings and change
boot options.&lt;/p&gt;&lt;p&gt;Let's consider /dev/sde as the name of the USB stick device on the host where
it will be prepared. A GPT partition table can be created via GNU parted, as follows:&lt;/p&gt;&lt;pre&gt;&lt;code&gt;parted /dev/sde   # interactively create the EFI, swap and root partitions
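# Just as a sketch, an equivalent layout could be created non-interactively;
# the sizes are only an example (512MiB EFI, 4GiB swap, the rest as root):
#   parted -s /dev/sde mklabel gpt
#   parted -s /dev/sde mkpart ESP fat32 1MiB 513MiB
#   parted -s /dev/sde set 1 esp on
#   parted -s /dev/sde mkpart swap linux-swap 513MiB 4609MiB
#   parted -s /dev/sde mkpart root ext4 4609MiB 100%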
partprobe --summary /dev/sde
sfdisk -l /dev/sde
mkfs.vfat -F 32 /dev/sde1
mkswap /dev/sde2
mkfs.ext4 /dev/sde3&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Once done, installing the base system is immediate.&lt;/p&gt;&lt;pre&gt;&lt;code&gt;mount /dev/sde3 /mnt
debootstrap trixie /mnt
mkdir -p /mnt/boot/efi
mount /dev/sde1 /mnt/boot/efi

mount -o bind /dev /mnt/dev
mount -t devpts devpts /mnt/dev/pts
mount -t proc proc /mnt/proc
mount -t sysfs sysfs /mnt/sys
mount -t tmpfs run /mnt/run

cp /etc/apt/sources.list.d/debian.sources /mnt/etc/apt/sources.list.d/.
cp /etc/resolv.conf /mnt/etc/.

echo &amp;quot;nas&amp;quot; &amp;gt; /mnt/etc/hostname
sed -i -e 's/localhost$/localhost\n127.0.0.1\tnas/' /mnt/etc/hosts

rootfs=$(blkid /dev/sde3 | grep TYPE=\&amp;quot;ext4\&amp;quot;|awk '{print $2}'|cut -d\&amp;quot; -f2)
vfat=$(blkid /dev/sde1|grep TYPE=\&amp;quot;vfat\&amp;quot;|awk '{print $2}'|cut -d\&amp;quot; -f2)
swap=$(blkid /dev/sde2|grep TYPE=\&amp;quot;swap\&amp;quot;|awk '{print $2}'|cut -d\&amp;quot; -f2)

cat &amp;gt;/mnt/etc/fstab &amp;lt;&amp;lt;EOF
UUID=$rootfs / ext4 noatime,errors=remount-ro 0 1
UUID=$vfat /boot/efi vfat noatime,umask=0077 0 1
UUID=$swap none swap sw 0 0
EOF

chroot /mnt
apt update
apt upgrade -y
apt install grub-efi-amd64 linux-image-amd64 ssh \
        firmware-misc-nonfree \
        firmware-realtek xfsprogs rsync pmount \
        gddrescue screen util-linux-extra bash-completion \
 	mdadm  parted smartmontools htop ntp unattended-upgrades sudo
useradd -m -G sudo -s /bin/bash -c 'Your Name' your_username
passwd your_username
adduser your_username plugdev
apt install tzdata locales
dpkg-reconfigure locales
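# Optionally set the timezone too (the tzdata package was installed above)
dpkg-reconfigure tzdata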
grub-install --target=x86_64-efi --force-extra-removable /dev/sde
update-initramfs -u
apt clean
exit # leave the chroot
umount /mnt/run
umount /mnt/sys/firmware/efi/efivars
umount /mnt/sys
umount /mnt/proc
umount /mnt/dev/pts
umount /mnt/dev
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Note that required HDDs can be easily installed later. I manually configured
the two disks with GNU parted for a GPT Linux RAID partition.  After booting
with the stick, setting up the md array is straightforward. Typically,
the USB stick runs as &lt;code&gt;/dev/sdd&lt;/code&gt;.&lt;/p&gt;&lt;pre&gt;&lt;code&gt;sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mkfs.xfs /dev/md0
mkdir /data
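# Optionally record the array in mdadm.conf and refresh the initramfs, so it is
# assembled by UUID at boot (kernel names such as /dev/md0 may change):
mdadm --detail --scan &amp;gt;&amp;gt; /etc/mdadm/mdadm.conf
update-initramfs -u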
data=$(blkid /dev/md0|grep TYPE=\&amp;quot;xfs\&amp;quot;|awk '{print $2}'|cut -d\&amp;quot; -f2)
echo &amp;quot;UUID=$data /data xfs defaults 1 1&amp;quot; &amp;gt;&amp;gt;/etc/fstab&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;In order to allow shutting down by pressing the power button, it is required to
configure &lt;code&gt;systemd-logind&lt;/code&gt; as follows.&lt;/p&gt;&lt;pre&gt;&lt;code&gt;sed -i -e 's/^#HandlePowerKey=poweroff/HandlePowerKey=poweroff/' \
       -e 's/^#HandlePowerKeyLongPress=ignore/HandlePowerKeyLongPress=ignore/' \
    /etc/systemd/logind.conf

systemctl restart systemd-logind.service&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;It could also be a good idea to stop the periodic auto-scan on the RAID volume
for big disks, which can take ages to run.&lt;/p&gt;&lt;pre&gt;&lt;code&gt;sed -i -e 's/^AUTOCHECK=true/AUTOCHECK=false/' /etc/default/mdadm
systemctl restart mdmonitor.service&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;The network configuration depends definitively on the type of connection used
and the home network setup. In my case, the NAS uses a static IPv4
address, so it can be configured through &lt;code&gt;ifupdown&lt;/code&gt;, and it is only necessary
to correctly write the &lt;code&gt;/etc/network/interfaces&lt;/code&gt; for the &lt;code&gt;enp1s0&lt;/code&gt; Realtek 2.5Gb
Ethernet interface. Note that it requires a non-free firmware blob to run.&lt;/p&gt;
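&lt;p&gt;Just as a sketch, a minimal static setup could look like the following (the addresses are placeholders to adapt to your own network):&lt;/p&gt;&lt;pre&gt;&lt;code&gt;# /etc/network/interfaces (static address example)
auto enp1s0
iface enp1s0 inet static
    address 192.168.1.10/24
    gateway 192.168.1.1&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;After the initial syncing, additional software to better manage the NAS can be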
installed, but that is optional and can be the subject of a different post. For
convenience, a copy of the USB stick with the complete
configuration is a good idea, to allow a fast recovery in case of failures.&lt;/p&gt;</content></entry><entry><title>A call to minimalistic programming</title><id>https://lovergine.com/a-call-to-minimalistic-programming.html</id><author><name>Francesco P. Lovergine</name><email>mbox@lovergine.com</email></author><updated>2025-09-10T17:00:00Z</updated><link href="https://lovergine.com/a-call-to-minimalistic-programming.html" rel="alternate" /><content type="html">&lt;p&gt;Minimalism in development is a forgotten virtue of our time that should gain
more attention. A straightforward summary of some minimalism principles is
available &lt;a href=&quot;http://minifesto.org/&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;here&lt;/a&gt;. Briefly, the principles of minimalism
in Software Engineering can be summarized as follows, based on the manifesto for
minimalism.&lt;/p&gt;&lt;ol&gt;&lt;li&gt;&lt;em&gt;Fight for Pareto's law&lt;/em&gt;: look for the 20% of effort that will yield 80% of the results.&lt;/li&gt;&lt;li&gt;&lt;em&gt;Prioritize&lt;/em&gt;: minimalism isn't about not doing things but about focusing first on the important.&lt;/li&gt;&lt;li&gt;&lt;em&gt;The perfect is the enemy of the good&lt;/em&gt;: first do it, then do it right, then do it better.&lt;/li&gt;&lt;li&gt;&lt;em&gt;Kill the baby&lt;/em&gt;: don't be afraid of starting all over again. Fail soon, learn fast.&lt;/li&gt;&lt;li&gt;&lt;em&gt;Add value&lt;/em&gt;: continuously consider how you can support your team and enhance your position in that field or skill.&lt;/li&gt;&lt;li&gt;&lt;em&gt;Basics, first&lt;/em&gt;: always follow top-down thinking, starting with the best practices of computer science.&lt;/li&gt;&lt;li&gt;&lt;em&gt;Think differently&lt;/em&gt;: simple is more complicated than complex, which means you'll need to use your creativity.&lt;/li&gt;&lt;li&gt;&lt;em&gt;Synthesis is the key to communication&lt;/em&gt;: we have to write code for humans, not machines.&lt;/li&gt;&lt;li&gt;&lt;em&gt;Keep it plain&lt;/em&gt;: try to keep your designs with a few layers of indirection.&lt;/li&gt;&lt;li&gt;&lt;em&gt;Clean kipple and redundancy&lt;/em&gt;: minimalism is all about removing distractions.&lt;/li&gt;&lt;/ol&gt;&lt;p&gt;Most of those principles are coherent with each other and relate heavily to the
well-known Unix &lt;a href=&quot;https://en.wikipedia.org/wiki/KISS_principle&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;KISS principle&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;An extended and fascinating book about the practical application of such
principles is Eric S. Raymond's &lt;a href=&quot;http://www.catb.org/~esr/writings/taoup/html/&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;&lt;em&gt;&amp;quot;The Art of Unix Programming&amp;quot;&lt;/em&gt;&lt;/a&gt;, which I
strongly recommend reading. I can also recommend a now-classic volume on the
same topic by John Ousterhout, &lt;a href=&quot;https://web.stanford.edu/~ouster/cgi-bin/book.php&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;&lt;em&gt;&amp;quot;A Philosophy of Software Design&amp;quot;&lt;/em&gt;&lt;/a&gt;. Both outline
practical examples of how minimalism in design can be effectively embraced, with
a focus on doing the right thing sooner rather than later.&lt;/p&gt;&lt;p&gt;The same principles could (or maybe should) be applied even to programming
languages, but this is often a neglected aspect of such a minimalistic approach.
Note that one of the most successful languages of all time is the C language,
which indeed has a straightforward syntax and yet is not easy to use
correctly (the principle being that what is simple is not necessarily easy, too).
That's because programmers need to create their own abstractions and
layers to build their vision of a software design. This is
precisely the opposite of the C++ or Java approach, where the entire
specification spans thousands of pages, and many high-level abstractions are
integral parts of the language. The same applies to Python nowadays,
which started as a simple language, cleaner and more readable than Perl, but now
has a wide and articulated specification. Again, hundreds of pages are now
needed to describe a once-simple language, to which tons of new features and
abstractions have been added to enrich its expressiveness. If one considers its
standard libraries and modules, the current situation appears even worse. Can
such an approach be considered &lt;em&gt;easier&lt;/em&gt;? I don't think so. Let me ask: how can a
program be considered simple if it relies on hundreds (or even thousands,
counting dependencies recursively) of external modules, as well as hundreds
of syntactical constructs and pieces of glue? Some languages also
manage multi-versioned dependencies, allowing a program to cross-depend on
multiple versions of the same module (yes, JavaScript, I'm talking about you),
with the concrete possibility of introducing obscure bugs as a result. At the
opposite extreme, there is the consideration that we only know and deeply
understand what we make.&lt;/p&gt;&lt;p&gt;Minimalism also means actively seeking a balance between these two opposing
approaches, because reusing third-party modules and packages can be an immediate
solution to deadline urgencies, but can also potentially introduce instability
and dependencies on unmaintained software in the long run.
Long dependency chains, where changes happen independently of the main program's
focus and are driven by third-party motivations and schedules, often with unfortunate
timing for depending projects, can cause breakages at multiple levels.&lt;/p&gt;&lt;p&gt;Of course, to reach
the right tradeoff, a few things need to be considered. A single programmer
cannot be smarter than the many libraries and modules out there, which
multiple developers may have spent hours, weeks, months, or even years refining.
That's true, but it is also true that not all libraries or modules are
written with the same level of quality and effort. For instance, we all know
cases of elementary modules available for Node that could be easily avoided, and
instead are imported out of some form of development laziness. Sometimes the
features actually needed are only a small portion of the whole
library or module, and could be reimplemented with very reasonable effort and
time, an approach that is amplified in modern times, when AI tools can
significantly increase productivity in such cases, as the sketch below illustrates.&lt;/p&gt;
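&lt;p&gt;As a purely illustrative sketch: a tiny left-padding helper, of the kind that is often pulled in as yet another Node dependency, can be written in a couple of lines of plain JavaScript on top of the standard &lt;code&gt;String.prototype.padStart&lt;/code&gt; method (part of the language since ES2017); the &lt;code&gt;leftPad&lt;/code&gt; name below is just an example:&lt;/p&gt;&lt;pre&gt;&lt;code&gt;// Purely illustrative: left-pad a value to a given length without any external module.
// String.prototype.padStart has been part of the language since ES2017.
function leftPad(value, length, fill) {
  return String(value).padStart(length, fill || ' ');
}

console.log(leftPad(42, 5, '0')); // prints 00042
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;The point is not that padding strings is hard; it is that every such micro-dependency adds one more node to the supply chain that has to be trusted, audited, and kept up to date. I would simplify these concepts with some additional mottos:&lt;/p&gt;&lt;ol&gt;&lt;li&gt;&lt;em&gt;Limit your external dependencies&lt;/em&gt;: avoid depending on modules or libraries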
that are not strictly required to significantly reduce the total development
time, that do not have rock-stable interfaces and features, or that lack
a clear and settled roadmap.&lt;/li&gt;&lt;li&gt;&lt;em&gt;Reproducibility of the software stack is a must&lt;/em&gt;: these days,
&lt;a href=&quot;https://en.wikipedia.org/wiki/Software_supply_chain&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;an SBOM&lt;/a&gt; has
become recommended or even mandatory, but it should not only document external
dependencies and their versions: the full process of building the
runtime environment should also be fully defined and kept consistent for the long
term.&lt;/li&gt;&lt;li&gt;&lt;em&gt;Do not follow the latest oh-so-cool technology&lt;/em&gt;: while that could be acceptable for
an amateur project developed during spare time, it is not a good idea
to depend on a technology that lacks a clearly stated future, a
well-established development team, and proven sustainability in the long
term. I consider it a risk even to depend on a single-company project, and even
more so if that company is a startup. In short, this can be generically
regarded as minimalism in coding style.&lt;/li&gt;&lt;/ol&gt;&lt;p&gt;Moreover, for a mid-to-long-term web project, using a well-established framework such as Django
is probably better than
using the latest Node.js-based framework created six months ago that seems to be the
next 'big thing'. But that's probably only common sense. Instead, ask yourself
if your project should be created from scratch using a simple &lt;em&gt;jamstack system&lt;/em&gt;
and some microservices for well-defined and minimal parts. In many cases, that
is more than enough for too many CMS-based sites out there: indeed, I
continuously ask myself why a lot of websites are still based on WordPress, when
most of them could be easily converted into a handful of static pages and simple
JavaScript snippets that they load in any case. This can be read as a form of
minimalism in defining computing architectures, which can also allow
scaling up applications more easily.&lt;/p&gt;&lt;p&gt;So minimalism principles can be considered at multiple levels: for programming
languages, libraries, architectures, and design. However, they require skills,
in-depth research, and a significant amount of time to dedicate to continuous
refactoring and reflection on viable alternatives. And that's probably the
key point: developers with deadlines and urgency imposed by PMs are too often
tempted to follow the easiest and most feature-rich paths and deliver a solution of any
kind without much reflection on the final balance among effort, quality,
efficiency, and durability of results.&lt;/p&gt;&lt;p&gt;Of course, when it comes to minimalism, a special mention is due to the whole
&lt;a href=&quot;https://suckless.org&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;suckless effort&lt;/a&gt; on the uncompromising minimalism side.
And &lt;a href=&quot;https://motherfuckingwebsite.com/&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;why not?&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;Ok, ok, I'm joking. But you got the point.&lt;/p&gt;</content></entry><entry><title>About languages and tools: the walking dead and other legends</title><id>https://lovergine.com/about-languages-and-tools-the-walking-dead-and-other-legends.html</id><author><name>Francesco P. Lovergine</name><email>mbox@lovergine.com</email></author><updated>2025-05-08T13:00:00Z</updated><link href="https://lovergine.com/about-languages-and-tools-the-walking-dead-and-other-legends.html" rel="alternate" /><content type="html">&lt;p&gt;I'm writing this post to react to one of the many articles and threads about the
presumed death of this or that programming language, library, framework, or
tool. What that article was about and who wrote it is secondary. I could
synthesize my idea by citing a well-known quip by Mark Twain: &amp;quot;The reports of
my death are greatly exaggerated.&amp;quot;&lt;/p&gt;&lt;p&gt;Let me use &lt;em&gt;synecdoche&lt;/em&gt; as a rhetorical expedient and limit what follows to
programming languages. Of course, this post is by no means limited to them;
it could be applied with little effort to libraries, tools, frameworks, content
management systems, and many other tools of common use among developers.&lt;/p&gt;&lt;p&gt;&lt;img src=&quot;/images/walking-dead-coding.png&quot; alt=&quot;The Walking Dead coders&quot; /&gt;&lt;/p&gt;&lt;p&gt;Any developer who has been around enough knows that the death hoaxes about the
end of a programming language are a common refrain that returns almost every
year for most of them. The naked truth about programming languages is that
developers follow trends and fashions. Job market demand influences such trends,
as do certain application categories and technologies that appear from time to
time. Without naming any in particular, there are currently tons of languages
that still have significant (or smaller, but still sustainable)
communities and that have fallen into the rearguard because their trendy momentum
lies in the past. A lot of people mistake the end of frantic development periods
(or, even worse, the absence of headlines on major tech news sites) for death. I
could cite many such products that were considered dead a long time ago and
still see one or more releases per year. From stability to death, it is often a
matter of points of view.&lt;/p&gt;&lt;p&gt;I would not refer to Fortran, Cobol, Ada, Prolog, Lisp, etc., which have been
around for 60-70 years. For me, all those are clearly niche languages that still
have their own use in specific domains, and that has been true at least in the
last 40 or 50 years. In most cases, they are simply not updated with new features, or
not even able to handle common applications or programming patterns of the modern
era.&lt;/p&gt;&lt;p&gt;Who would try to write a web framework in Fortran or Cobol? Oh well, you
probably don't know  &lt;a href=&quot;https://fortran.io/&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;Fortran.io&lt;/a&gt;,
an MVC web framework written in Fortran90. Or even any of the full-stack web
frameworks written in Cobol.
So, it is better to say that for such languages, some applications are purely
intellectual challenges, drafts, and sketches that no one would seriously(?)
consider in production environments. That does not imply that such products are
at a dead end, but only that they are not considered for those applications (but
could be for other ones).&lt;/p&gt;&lt;p&gt;Beyond those, there are more recent languages that gained the stage about
20-30 years ago or less, such as Java, Perl, Ruby, or PHP, which are still in
use in production environments but are declining in popularity. A special case is
C/C++ and its variants, which many consider low-level languages at a dead end,
but which are actively used in many application domains. Today, Rust is widely
rumored to be their natural replacement, but again, there is no evidence
that it will truly be so in the future. Often, in the past, what appeared to be
an ineluctable success in a certain period turned out to be a pure illusion,
replaced by the next dream language.&lt;/p&gt;&lt;p&gt;So what? A dose of sane realism is genuinely required. Developers are voluble
and fall into infatuations like teenagers. What today seems like the way to go
may turn out to be a mere dazzle a few years (or even months) later. Being a bit
conservative can help, but the whole &lt;em&gt;silver bullet idea&lt;/em&gt; for tooling is
self-defeating. There is no one ring to rule them all. Simply, there is no
single language that wins in all fields, and the skill of being able to switch
comfortably among multiple ones (possibly finding the most helpful for a
specific goal or application domain) is the true superpower of a developer.
That's especially true if such languages expose different programming patterns
and abstractions. All the rest is gossip and opinion. Sometimes, a specific
package or framework determines the convenience of using a language (do you
remember the whole Ruby-on-Rails momentum?), as does the existence of a very
specific language feature (e.g., Erlang's efficiency in concurrency). That is the
true basis of a reasoned choice for an implementation.&lt;/p&gt;&lt;p&gt;That said, the only concrete problem nowadays is the job market. A learning plan
that includes only good-but-old tools, or, at the opposite extreme, only very recent
ones, could be equally wrong. The job opportunities could be equally few in
both cases. It seems that the most convenient language is one with a vast community,
but unfortunately, that could be a transient status: at the very beginning of
the third millennium, it seemed that Perl was the language of choice for web
applications, which was clearly no longer the case just a few years later. So what?
Well, a grain of salt is due in any case, but what seems like the current
primary choice could become a language of the past in just a few years.&lt;/p&gt;&lt;p&gt;A complete developer should at least know at a non-trivial level a
system programming language (e.g., C/C++, or Rust), as well as Python,
JavaScript, and possibly also a pure functional one (e.g., Scheme, Clojure, or
Haskell). Of course, moving from a non-trivial level to the guru level is a
matter of time and experience, and it may well never happen in practice for
each of them. The more languages and programming paradigms you master,
the better for you.&lt;/p&gt;&lt;p&gt;What will be the next dead language in the near future that still seems to be
the current Big Thing? I have some suspicions, but I keep them to myself.&lt;/p&gt;</content></entry><entry><title>Refurbish to fight against planned obsolescence</title><id>https://lovergine.com/refurbish-to-fight-against-planned-obsolescence.html</id><author><name>Francesco P. Lovergine</name><email>mbox@lovergine.com</email></author><updated>2024-09-25T18:30:00Z</updated><link href="https://lovergine.com/refurbish-to-fight-against-planned-obsolescence.html" rel="alternate" /><content type="html">&lt;p&gt;The planned obsolescence of computers and other IT electronic equipment is a
well-known plague of our age. Years ago, I stopped buying new computers and
now prefer refurbished ones whenever possible. That includes all my personal ICT
boxes, and even at work, I try to stretch out the life cycle of the equipment in
use under my direct management.  Proprietary OSes often limit the lifespan of IT
equipment, but in some cases, vendor-independent FOSS software can replace the
original one at End of Life (EOL). This is beneficial because FOSS software is
often more lightweight, customizable, and has a longer support life, thereby
extending the usability of your equipment.&lt;/p&gt;&lt;p&gt;For instance, I'm writing these notes on a dual-processor Lenovo Thinkstation
C30 refurbished workstation that left the factory 10 years ago and that I bought,
with plenty of RAM, 4 years ago. Thanks to the use of SSD storage instead of the old
HDDs, I hope to use it for at least another 4-5 years. My living room Thinkpad
L540 refurbished laptop is the same age, again with a replaced SSD, and I bought
it in the same period.  My main office personal HPZ320 workstation has been
around for almost the same number of years, while my office laptop is a damn new
Thinkpad X1 Carbon 5th-Gen from 2017. Both of them were bought new.&lt;/p&gt;&lt;p&gt;Of course, all of them run Debian GNU/Linux and have been upgraded regularly
over the years. This is essential. Otherwise, the well-known proprietary OSes
will obsolete your boxes, and you will be screwed.  Generally, this is an easy
task because reasonably aged computers are better supported by FOSS kernels. It
is better to avoid vendors that are well known for problematic or unsupported
devices.&lt;/p&gt;&lt;p&gt;I have even run a couple of aged Seagate personal NASes past their EOL for years, thanks
to installing a plain Debian distribution for ARMv7 processors instead of the
original Seagate one. I think they have been around for almost ten years, and
one is a used unit. Of course, the HDDs have been replaced in the meantime.&lt;/p&gt;&lt;p&gt;Some servers still in use are the rock-solid IBM xSeries. They were born with
Suse Linux ES 11 and are still around and doing their jobs under Debian. Of
course, the trade-off between power consumption, budget, and goals must be
rigorously considered.&lt;/p&gt;&lt;h2 id=&quot;plan-for-the-future-in-advance&quot;&gt;Plan for the future in advance&lt;/h2&gt;&lt;p&gt;In order to run computers for a long time, I found it essential to buy more
RAM than needed at the moment of purchase (even in the case of new boxes). You
will need more, even if you don't know it yet. Possibly, the same is true for
the number of cores. If you buy a refurbished computer, it is inevitably a
professional/business unit you would never find in a shop for home users. That's
better because they generally have superior quality and decent support in the
Linux kernel. In the business market, note that many companies retain computers
only for the extended warranty period (by internal policy), so you will always
find 3-5 year-old equipment recycled by medium and big companies. For personal use,
they are more than enough and exceptionally usable with a FOSS OS.&lt;/p&gt;&lt;p&gt;Note also that current computers (except for GPUs) are comparable to reasonably old
computers (let's say the last 10-12 years) in terms of the number of cores,
speed of RAM, and bus throughput. They are faster, but not so much. Their
average power consumption should be lower, but Moore's law is dead, sorry. So,
buying new computers is quite pointless in most cases, and even worse if you
purchase a consumer low-end computer for a limited budget. A refurbished box is
generally the best option because it is cleaned, incorporates ad hoc changes
(e.g., a new SSD for old units), is tested, and includes a one-year warranty from
its vendor (generally, but just check before buying). Moreover, a refurbished
computer was at least a SOHO unit, not a home product, so it is a much better
option.&lt;/p&gt;&lt;h2 id=&quot;be-redundant-be-safe&quot;&gt;Be redundant, be safe&lt;/h2&gt;&lt;p&gt;I never base my daily activities on a single box. At home and at work, I always
have spare machines and replacements for emergencies. You cannot trust old (or even
new) units, certainly not in the long term. The same goes for parts such as
HDDs or SSDs. &lt;em&gt;Ça va sans dire&lt;/em&gt; that you need to take care of your data and
backups, with the required redundancy, too.&lt;/p&gt;&lt;h2 id=&quot;missions-impossible&quot;&gt;Missions impossible&lt;/h2&gt;&lt;p&gt;As said, the room for refurbishing is minimal for GPUs and in some application
domains. I'm not a gamer, but there are rumors about forced obsolescence every
few years for commercial reasons triggered by games and vendors. Even for
high-performance computing, any GPU from before late 2014 is no longer truly
&lt;a href=&quot;https://en.wikipedia.org/wiki/CUDA#GPUs_supported&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;supported or usable&lt;/a&gt;. Again,
it is partially a matter of hardware, but proprietary software, including SDKs,
renders old GPUs obsolete.&lt;/p&gt;&lt;p&gt;Nowadays, the 2016 generations of GPUs with 8GB of VRAM are basically to be
retired for many applications: that was the standard size at the time, but now
vendors have decided that 8GB or less is too little for any practical use. Too
bad.&lt;/p&gt;&lt;p&gt;Smartphones are another area where planned obsolescence is the rule. The most
expensive ones get 7 years of software support, but most can only be upgraded
across a couple of major versions. Basically, you should change your phone every 3-5
years because security support is limited in time. But the average replacement
cycle seems to be less than 3 years anyway. Often, an older phone can still be used,
but it is at risk due to missing security support. Too bad, again.&lt;/p&gt;&lt;p&gt;The same happens with much so-called smart electronic equipment: having a box
permanently connected to the network without any security support is
generally a bad idea, and only a small minority of such devices can be upgraded to FOSS
firmware. Cameras, TVs, routers, access points, and many other objects in our
homes have a sort of implicit end-of-life countdown that often expires before
the hardware itself wears out. You need to be very selective about equipment
vendors.&lt;/p&gt;&lt;p&gt;In many cases (guess what?) such devices now run an embedded version of
GNU/Linux that could be community-supported after the end of vendor support, if
the vendor provided enough information, which is rarely
the case. In most cases, the vendor only publishes an archive of third-party
FOSS source packages, not the entire build system, so community support is not
practically viable. Any development documentation is usually missing
entirely, precisely to discourage people from following that path. In many cases, the
devices include totally proprietary chipsets and drivers, so there is no way at all.
That's also the reason why Android is only an almost-open mobile operating system.
Too bad, take three.&lt;/p&gt;&lt;h2 id=&quot;will-it-change-in-the-future&quot;&gt;Will it change in the future?&lt;/h2&gt;&lt;p&gt;In general, this battle was lost years ago. Expanding or repairing a laptop
can now be challenging due to the deliberate practices of many vendors to
discourage self-repair. That's true for a lot of other equipment, too.
Vendors are always ready to obsolete your products faster than you are
prepared for. I'm a pessimist about the outcome of this war.&lt;/p&gt;&lt;p&gt;&lt;em&gt;They're coming outta the goddamn walls&lt;/em&gt;&lt;/p&gt;</content></entry></feed>