Discussion:
[ale] [systemd] Boot speed
leam hall via Ale
2018-02-17 13:58:22 UTC
Possibly separating this discussion out into component parts. We'll
see how it goes.

I don't see boot speed as a game changer for systemd, even if it is a
lot faster. If you're booting your desktop then you're probably
already used to the "push the power button, hit the head, grab some
coffee" routine. If your system isn't up by then, maybe there's an
issue.

For servers, if you really want uptime, why aren't you redundant?
Reboot time is again not an issue if the service stays up.
Adrya Stembridge via Ale
2018-02-17 14:01:04 UTC
How is everyone maintaining uptime while keeping current on kernel security
patches for your given distro?
Damon L. Chesser via Ale
2018-02-17 14:23:52 UTC
As a professional, I am not.  Like the man said, if you need uptime, you
designed it wrong.  As a "hobbyist", "enthusiast", or just plain user, I
also don't care about uptime.  Go ahead, reboot your home box.  Even
shut it down when you are not using it and save a few hundred dollars a
year.
--
***@damtek.com
404-271-8699
Joey Kelly via Ale
2018-02-20 00:15:06 UTC
What's all this "shut down a computer" nonsense? Mine only shuts off in
the rare event the power goes off longer than my UPS stays up.
--
Joey Kelly
Minister of the Gospel and Linux Consultant
http://joeykelly.net
504-239-6550
Jim Kinney via Ale
2018-02-17 15:32:14 UTC
Add in virtual machines deployed by the millions and there's a huge need for rapid reboot.
--
Sent from my Android device with K-9 Mail. All tyopes are thumb related and reflect authenticity.
leam hall via Ale
2018-02-17 15:41:37 UTC
If you have a few million machines then redundancy better be high.
Steve Litt via Ale
2018-02-18 00:56:15 UTC
Post by leam hall via Ale
Possibly separating this discussion out into component parts. We'll
see how it goes.
I don't see boot speed as a game changer for systemd, even if it is a
lot faster.
If your computer is an entertainment appliance receiving live
broadcasts, a 2 second boot is better than a minute boot. If you're
spinning up hundreds of VMs or containers, the boot time of those
matters. If your boot takes 10 minutes, that's unacceptable.
Otherwise...
Post by leam hall via Ale
If you're booting your desktop then you're probably
already used to the "push the power button, hit the head, grab some
coffee" routine. If your system isn't up by then, maybe there's an
issue.
Zactly!
Post by leam hall via Ale
For servers, if you really want uptime, why aren't you redundant?
Reboot time is again not an issue if the service stays up.
In addition, just because Steve Litt once experimentally got systemd
to boot in 2 seconds doesn't mean that's the normal state of affairs.
Reports I hear on various mailing lists have healthy systemd systems
booting in about 20 seconds, and unhealthy ones taking two minutes.
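For what it's worth, on a systemd machine you can get numbers instead
of anecdotes; output and unit names will of course vary per box:

  systemd-analyze          # totals: firmware + loader + kernel + userspace
  systemd-analyze blame    # per-unit startup time, slowest first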

But back to your initial question about boot speed being a game changer:
Boot speed being a game changer is an existential necessity to the
systemd cabal because systemd's raison d'être is efficiency, and once
the computer is stably up, the init system has little to no effect on
efficiency. The systemd cabal is forced to wow and praise over the boot
speed, all the while saying "and many other things too", because, of
course, you're right: Few care whether boot takes 30 seconds or a
minute, and all too often it's the runit system that takes 30 seconds.

SteveT

Steve Litt
January 2018 featured book: Troubleshooting: Why Bother?
http://www.troubleshooters.com/twb
Damon L. Chesser via Ale
2018-02-18 01:16:36 UTC
Lest you think I am on the systemd side: I agree with what you wrote.
My Arch system takes forever (subjectively) to boot, maybe 20 to 30
seconds AFTER I give the encryption password, then I have to wait what
feels like an eternity to get my NIC to find an IP and initialize
(whether static or DHCP is used).  I am soooooo glad systemd fixed that
boot speed issue.  OTOH, eh.  Not really that big of a deal for me to
dig into it and fix it.
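(If I ever do dig in, something like this should show what the boot is
actually waiting on; it assumes a systemd box, and network-online.target
is just the usual suspect for NIC waits:)

  systemd-analyze critical-chain network-online.target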
--
***@damtek.com
404-271-8699

leam hall via Ale
2018-02-18 01:40:56 UTC
Hey Damon.

I probably thought that at first, but your point about paying the
bills is valid. If work switched to a systemd-based OS next week I'd
switch and suck it up. And mumble a bit.

I've been wanting to move out of straight Linux admin for a while.
Mostly because mgmt tends to view us as commodities whereas the
developers are semi-divine and always right. (HA!) It seems like RHEL
is going the way of VMware and the F5 BigIP; lots of stuff to prevent
the engineer from getting into the system and fixing things. Both
VMware and the BigIP were on RHEL last time I looked, way under the
hood. There was no expectation that you could get there, though.

In one terminal window I've done a git push of some sci-fi I'm
writing. In another I'm working on Ruby code to better collate book
chapters. I have a small whiteboard with an idea for a space merchant
text based game sketched out, at least at the first blush level.

I used to come home and play with sendmail attempts or apache configs.
So far I'm not seeing a lot of "whoo-hoo" when I think about learning
systemd.
Steve Litt via Ale
2018-02-18 02:22:42 UTC
Gotta pay the bills, no doubt about it. Systemd is such a mess it
presents a huge opportunity for trainers. If I were offered a systemd
trainer job for the right money, I'd shut up and systemd. I'd even use
it at home, to get home experience with my work systems.

None of which means I'd believe in systemd. I'd just laugh all the way
to the bank. This was the world I lived in when I was a developer in
the Windows world.

But I'd think twice about criticizing the character, ability or opinion
of systemd rejecters. Because in my heart I'd know they're right.

SteveT
Solomon Peachy via Ale
2018-02-17 14:33:31 UTC
Post by leam hall via Ale
I don't see boot speed as a game changer for systemd, even if it is a
lot faster. If you're booting your desktop then you're probably
already used to the "push the power button, hit the head, grab some
coffee" routine. If your system isn't up by then, maybe there's an
issue.
Faster boot times aren't the primary benefit.

The main point is to have a fully dependency-resolved set of
services/actions. A side benefit of this is parallelism, which in turn
leads to better boot times.

For example, the night before last I switched my home IPv6 provider
from a Hurricane Electric tunnel to Comcast's native static service.
When I restarted the network interface, systemd automatically restarted
all dependent services, without my needing to do anything else.
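(For the curious, the mechanics are ordinary unit dependencies. A
minimal sketch of the idea, with hypothetical unit names rather than my
actual config:)

  # /etc/systemd/system/radvd.service.d/net.conf (illustrative)
  [Unit]
  # PartOf= propagates stop/restart of the listed unit to this one, so
  # restarting the network service restarts this daemon as well
  PartOf=systemd-networkd.service
  After=systemd-networkd.service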
Post by leam hall via Ale
For servers, if you really want uptime, why aren't you redundant?
Reboot time is again not an issue if the service stays up.
Think of cloud instances that come and go based on realtime demand. The
faster things start up, the faster they can start serving.

I'm not saying that use case necessarily matters to anyone here, but
I'll take improved boot times all the same.

- Solomon
--
Solomon Peachy pizza at shaftnet dot org
Coconut Creek, FL ^^ (email/xmpp) ^^
Quidquid latine dictum sit, altum videtur.
Steve Litt via Ale
2018-02-18 20:41:23 UTC
Post by Solomon Peachy via Ale
Post by leam hall via Ale
I don't see boot speed as a game changer for systemd, even if it is
a lot faster. If you're booting your desktop then you're probably
already used to the "push the power button, hit the head, grab some
coffee" routine. If your system isn't up by then, maybe there's an
issue.
Faster boot times aren't the primary benefit.
The main point is to have a fully dependency-resolved set of
services/actions.
Daemontools, daemontools-encore, runit and s6 all have the ability to
define a dependency-resolved set of services/actions. You define them
with "if" statements in the run script instead of services, but the
result is the same. Plus, with the daemontools-inspired inits, you can
decide your own definition of a dependency process being "up", so if
you don't like the daemon author's definition of its being "up", you
can make up your own. It's pretty cool.

It also isn't all that necessary. Most run scripts, as they come from
the factory (at least with Void Linux), have no process dependency
checking, and in practice things seem to work just fine. But if one
wants process dependency checking, it requires only a simple "if"
statement within the dependent process' run script.
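A minimal sketch of such a run script (the service and daemon names
here are made up):

  #!/bin/sh
  # runit re-runs a failed run script after a pause, so exiting when
  # the dependency isn't up amounts to "wait until it is"
  sv check dnscache >/dev/null 2>&1 || exit 1
  exec mydaemon --foreground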
Post by Solomon Peachy via Ale
A side benefit of this is parallelism, which in
turn leads to better boot times.
For example, the night before last I switched my home IPv6 provider
from a Hurricane Electric tunnel to Comcast's native static service.
When I restarted the network interface, systemd automatically
restarted all dependent services, without my needing to do anything
else.
Runit can do that. I'm not sure it's a good idea: I'd rather run "ip
link set dev eth0 down; ip link set dev eth0 up", and the same with
wlo1. With such a change, I'd rather fix things up manually. For
situations where the network goes down and back up again, all I can say
is my computer brings back its network connection without needing the
network to be a service.
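Spelled out, that manual bounce is just:

  ip link set dev eth0 down
  ip link set dev eth0 up
  # and likewise for the wireless interface
  ip link set dev wlo1 down
  ip link set dev wlo1 up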
Post by Solomon Peachy via Ale
Post by leam hall via Ale
For servers, if you really want uptime, why aren't you redundant?
Reboot time is again not an issue if the service stays up.
Think of cloud instances that come and go based on realtime demand.
The faster things start up, the faster they can start serving.
I'm not saying that use case necessarily matters to anyone here, but
I'll take improved boot times all the same.
I'm sure nobody here begrudges the ability to bring up and down
containers on demand, and have them boot in a couple seconds. I
certainly don't. My use case isn't universal.

Problem is, with systemd's welded-together entanglement of large
sections of software with applications and the underlying OS, systemd
completely changes the way you adjust your software, and IMHO not for
the better if you're at all DIY. And systemd is so entangled that you
can't just yank it out and substitute another init. Systemd is the only
init system for which that is true. The systemd cabal is saying that
their use case IS universal, and my use case doesn't count.
Because of systemd's entanglement, replacing it is extremely
difficult, leaving people with my use case in a much more difficult
situation. This is why, years after the Debian decision, systemd is so
reviled and fought against.

You never saw this kind of thing with Vim vs Emacs: Neither tried to
weld itself into irreplaceability. You don't see it with KDE vs
Gnome: Neither was successful enough in welding itself into
irreplaceability. The last piece of software to generate this level of
antipathy and resistance was Windows.

SteveT

Steve Litt
January 2018 featured book: Troubleshooting: Why Bother?
http://www.troubleshooters.com/twb
Solomon Peachy via Ale
2018-02-19 02:57:30 UTC
Post by Steve Litt via Ale
It also isn't all that necessary. Most run scripts, as they come from
the factory (at least with Void Linux), have no process dependency
checking, and in practice things seem to work just fine. But if one
wants process dependency checking, it requires only a simple "if"
statement within the dependent process' run script.
So... if the parent restarts, who is going to restart the dependents?
Post by Steve Litt via Ale
Runit can do that. I'm not sure it's a good idea: I'd rather run "ip
link set dev eth0 down; ip link set dev eth0 up", and the same with
wlo1. With such a change, I'd rather fix things up manually. For
situations where the network goes down and back up again, all I can say
is my computer brings back its network connection without needing the
network to be a service.
I'd rather _not_ fix things up manually; by the time I've finished
everything it would have been faster (and less disruptive) to just
reboot the system.

Annoyingly there's a big gotcha that I missed -- Google apparently
requires matching IPv6 rDNS entries. I'd set up the HE tunnel
so long ago I forgot that they automatically created those
entries. Meanwhile, two days in, and Comcast hasn't fulfilled my ticket.

(DNS had already switched over by the time I'd discovered this, so I
couldn't just revert back. Joy...)
Post by Steve Litt via Ale
Problem is, with systemd's welded-together entanglement of large
sections of software with applications and the underlying OS, systemd
completely changes the way you adjust your software, and IMHO not for
the better if you're at all DIY.
You and I draw the "DIY" line at different places.

(I don't administer my own systems for the joy of it; they have specific
jobs to fulfil and I'm too much of a paranoid git to trust my data on
anyone else's systems..)
Post by Steve Litt via Ale
You never saw this kind of thing with Vim vs Emacs: Neither tried to
weld itself into irreplaceability. You don't see it with KDE vs
Gnome: Neither was successful enough in welding itself into
irreplaceability.
Emacs's viper-mode is arguably a better vi than vim. :P
Post by Steve Litt via Ale
The last piece of software to generate this level of antipathy and
resistance was Windows.
...Yet for all that antipathy and resistance, Windows still easily rules
the [PC] world.

- Solomon
--
Solomon Peachy pizza at shaftnet dot org
Coconut Creek, FL ^^ (email/xmpp) ^^
Quidquid latine dictum sit, altum videtur.
Jerald Sheets via Ale
2018-02-19 15:19:42 UTC
So let me interject into this conversational process a “norm” that’s evolving out in Silicon Valley.

We don’t reboot. When a machine is sufficiently “sick” that something as “profound” as a reboot is expected to be needed, the instance is terminated and a new node stands up in its place. There’s no such thing as persistent nodes any more in large scale work. YES, there’s a lot of architecture around that, and YES there’s a lot of clustering and session maintenance voodoo around that on the application side. However, not having to keep up with instances any more, and terminating/replacing instead of rebooting, is every bit as fast as some of these boot times we’re discussing here.

It is my opinion (and prediction) that this will be something of a non-issue over the next several years.

I cannot tell you the number of systems that are either running in a
containerized fashion with CoreOS, stripped down operating systems you
create yourself with not much more than a boot loader and a single boot
environment on the OS to run a single app with a single set of boot
scripts for that single app or even micro containers over docker that
run nothing but OS libraries and language dependencies required to run
the app itself… nothing more… (i.e., no SystemD OR SysV Init at all)
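To make that concrete, here is a sketch of the micro-container end of
that spectrum; the app name is made up, and it assumes a statically
linked binary:

  # Dockerfile: nothing but the application itself
  FROM scratch
  COPY myapp /myapp
  # no shell, no libc, no init system of any flavor
  ENTRYPOINT ["/myapp"]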

Our long-standing way of doing things is on the move again, folks. This particular conversation would now be classified as a “legacy” conversation. Look to container and lambda-style infrastructure to start taking big chunks of workload “out there”.

While you may be skeptical because of the current environment you care for and feed, one day you may have to leave it and go somewhere else, and what you find at your new role may look nothing like what you’re dealing with today.

My last 3 years (my 23rd, 24th, and 25th in the business) have been more education than I encountered in the first 3 years of my career.

—j
James Taylor via Ale
2018-02-19 16:08:14 UTC
I don't entirely disagree with that observation, but there are still,
and will be, a lot of major systems that don't lend themselves well to
that model.
For example, the SAP HANA systems use multi-terabyte in-memory databases
that don't get rebooted because it can take hours, maybe days, to write
back and reload the databases from disk.
That was one of the drivers for the live kernel patching that is now
available.
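(For reference, Red Hat's kpatch is one such tool; the workflow looks
roughly like this, with an illustrative patch module name:)

  kpatch list                          # loaded and installed patch modules
  kpatch load kpatch-CVE-2018-NNNN.ko  # patch the running kernel, no reboot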
I agree that large containerized server farms (plantations?) are the
norm for consumer cloud space, but I don't think non-ephemeral systems
are going away any time soon.

None of which is relevant to the systemd argument, in any case.
-jt



James Taylor
678-697-9420
Leam Hall via Ale
2018-02-19 16:24:26 UTC
Let me render an opinion based on Jerald's comments. Containerization
will change "things" as much as VMware did. SMBs will use a cloud
provider (AWS, Linode). Large enterprises will use thin clients or
portable workstations to connect to their environment. We've probably
all seen the signs.

Griping about systemd feels good but doesn't prepare me for the next
career challenges. So, what to do?

We have, individually and collectively, at least a few choices.

1. Change careers so systemd/boot times/containers don't matter.
2. Hang out with the legacy systems.
3. Join those moving to the "new thing".
4. Help enable the "even newer thing".

I'd say the "new thing" (#3) is cloud and the "even newer thing" is
containers (#4). There are multiple container technologies and the
market is likely to settle on one.

As I look at my career and this list, the path forward isn't clear. I
get paid for #2, while #1 has been a long-standing option. I'd need
energy and a team to do #4, while #3 is a safe forward option. However,
"safe" is relative; they can find younger and cheaper cloud admins.

Thoughts?

Leam

Lightner, Jeffrey via Ale
2018-02-19 16:53:27 UTC
We are doing containers on our own systems (i.e. not cloud) using CoreOS. CoreOS (the base OS) relies on systemd. I don't know that going to the cloud for containers will eliminate the need to interact with systemd.

I started working with DOS 2.0 and Lotus 123v1A in the early 80s. Later I had to move on to Novell file servers and later yet to AT&T UNIX then various other UNIX flavors and finally to Linux.

I use MS Windows workstations because that is invariably what my employer assigns no matter how much they use open systems for servers. Heck these days I even help my neighbor with her MacBook though I despise Apple's proprietary mindset.

Moving on is something I've done many times in my career and personal life, and something I'll likely continue to have to do. With every new thing come the folks that equate it with Satanism and/or communism, but all the gnashing of teeth has never halted any of these new things. In the meantime, things generally lauded by one and all (e.g. NeXT) seem to have gone by the wayside.

Come to the dark side, Luke. You know I am your father...


leam hall via Ale
2018-02-19 17:26:05 UTC
Yeah, I asked on #CoreOS and they said it required systemd. Since RH
bought CoreOS I'm not sure how Docker will fare. Maybe they will win
the container wars, maybe not. I can see systemd or something like it
being useful for a container host OS. At that point it becomes a
question of "where in the stack I want to work" since I don't have
enough brain cells for the entire stack.

Any insights on what container platform will get the majority of
paying market share?

Leam
James Taylor via Ale
2018-02-19 17:34:04 UTC
As far as I know, Docker is just a container management framework.
All of my commercial servers use SUSE, so I don't see moving away from Docker here.
I've never used Red Hat as a primary Linux. The closest is some CentOS appliances, which don't require a lot of in-depth OS management.
Looking from the outside in, Red Hat seems to be moving more and more towards having their own tools for things.
Not something that looks comfortable from an outsider's perspective.
-jt




James Taylor
678-697-9420
leam hall via Ale
2018-02-19 17:37:39 UTC
Ah, I thought Docker was also the host OS, or sat on one that was
custom crafted for it.
Lightner, Jeffrey via Ale
2018-02-19 17:42:00 UTC
Red Hat's containerization platform is called Atomic (pre-CoreOS). They moved that to Docker some time ago. I'm pretty sure Atomic and CoreOS will converge. I suspect they bought CoreOS because they weren't getting many of the older Atomic folks to move to the new one, and the folks that were using CoreOS weren't moving to Atomic at all. We certainly weren't, even though we use RHEL extensively for non-container systems.



Jerald Sheets via Ale
2018-02-19 17:53:55 UTC
I may be at fault for this weirdness. My punctuation was poor. In my original message on the original thread, I said:


"CoreOS, stripped down operating systems you create yourself with not much more than a boot loader and a single boot environment on the OS to run a single app with a single set of boot scripts for that single app or even micro containers over docker that run nothing but OS libraries and language dependencies required to run the app itself
nothing more
”


Change that to:

"CoreOS stripped down operating systems you create yourself with not much more than a boot loader and a single boot environment on the OS to run a single app with a single set of boot scripts for that single app or even micro containers over docker that run nothing but OS libraries and language dependencies required to run the app itself
nothing more
"

One comma and it screws my whole intent.

The guys in engineering are creating custom containers that are specifically torn down and cobbled together from a very small subset of components, up to and including only a boot loader, a few scripts, and just enough OS and language libraries to run the app.

It’s a lot harder to compromise something that has next to nothing you’re expecting to see on it.


:)


—j
Jim Kinney via Ale
2018-02-19 17:08:51 UTC
I vote for #1 but I need a backer to start up another brewery :-)
--
Sent from my Android device with K-9 Mail. All tyopes are thumb related and reflect authenticity.
Steve Litt via Ale
2018-02-19 17:50:57 UTC
One thought. Systemd was not the result of a meritocracy: It was
financed by Red Hat, who, as a purveyor of training and consultants, has
everything to gain from a universally more complex GNU/Linux. During the
coup, Red Hat must finance the development and troubleshooting of
systemd, and it doesn't come cheap.

I'm pretty sure they expected everyone to accept systemd by late 2015.
I think they might abandon systemd pretty soon, for yet another "new
thing". If my suspicion is true, you can take your systemd learning and
throw it in the trashcan, because it stops having relevance.

I'm a purveyor of books and courses about troubleshooting, so to a
certain extent I can afford to ignore what's going on with one little
section of one operating system. You're not in that position: I suggest
you learn a lot about systemd, make money with systemd, but be ready to
jump to the next thing (which might be even worse).

I'd also suggest that, just to keep your perspective on how a computer
works at the lower levels, you make your home computer init with runit
or s6. The combination of systemd knowledge and runit or s6 knowledge
will make you rare, and will frequently enable you to solve problems
others can't.

SteveT
Solomon Peachy via Ale
2018-02-19 20:54:42 UTC
Post by Steve Litt via Ale
One thought. Systemd was not the result of a meritocracy: It was
financed by Red Hat, who, as a purveyor of training and consultants, has
everything to gain from a universally more complex GNU/Linux. During the
coup, Red Hat must finance the development and troubleshooting of
systemd, and it doesn't come cheap.
To paraphrase something you said to me earlier in this thread, your
opinions (and those of others!) don't count as facts.

So, respectfully, [citations needed].

Also, you forget that in a meritocracy, those who do the actual work get
to determine the future. The systemd authors (including many not
actually paid by Red Hat) put the work in. Nobody else has.

Well, except arguably for Devuan -- they at least put their money where
their mouth was and forked Debian. Unfortunately, they haven't actually
put in any effort where it actually matters; that is, working with the
various upstreams to maintain and support the non-systemd codepaths that
were barely functional before systemd even came along.

Ironically, for a distro forked to "maintain init system freedom", they
actually provide *less* choice than what they forked from. Their sole
differentiating feature is the outright removal of libsystemd.so from
filesystems; the "alternative" inits that are the raison d'être for
Devuan aren't supported any better there than in upstream Debian.

- Solomon
--
Solomon Peachy pizza at shaftnet dot org
Coconut Creek, FL ^^ (email/xmpp) ^^
Quidquid latine dictum sit, altum videtur.
Steve Litt via Ale
2018-02-20 00:15:31 UTC
Post by Solomon Peachy via Ale
Post by Steve Litt via Ale
One thought. Systemd was not the result of a meritocracy: It was
financed by Red Hat, who, as a purveyor of training and consultants, has
everything to gain from a universally more complex GNU/Linux. During the
coup, Red Hat must finance the development and troubleshooting of
systemd, and it doesn't come cheap.
To paraphrase something you said to me earlier in this thread, your
opinions (and those of others!) don't count as facts.
So, respectfully, [citations needed].
Alright, I'll withdraw the sentence about meritocracy. It's a no-op
anyway. Every single other thing in my paragraph is a known fact easily
supportable by a quick internet search.

[snip S. Peachy meritocracy clause: if I don't talk about it, I'm not
responding to your talking about it]
Post by Solomon Peachy via Ale
those who do the actual work
get to determine the future.
Same exact thing can be said of dictators and clever criminals. And
that future is often short term.
Post by Solomon Peachy via Ale
The systemd authors (including many not
actually paid by Red Hat) put the work in. Nobody else has.
Of course not. Who would complexify an OS to add a few features, when
those features' benefits could have been added much more simply and
modularly?
Post by Solomon Peachy via Ale
Well, except arguably for Devuan -- they at least put their money where
their mouth was and forked Debian. Unfortunately, they haven't actually
put in any effort where it actually matters; that is, working with the
various upstreams to maintain and support the non-systemd codepaths that
were barely functional before systemd even came along.
Yeah, well, they didn't have Red Hat's billions behind them when they
negotiated with the upstreams. They had day jobs, many of which were
negatively impacted by systemd. But anyway, the word "codepaths" isn't
defined in dictionary.com, the Urban Dictionary, acronymfinder.com,
or a generic web search, so unless by "codepaths" you mean the sysvinit
start and stop scripts, I doubt there was anything barely functional
pre-systemd, and once again, there were and are plenty of init systems
that don't use those start and stop scripts.
Post by Solomon Peachy via Ale
Ironically, for a distro forked to "maintain init system freedom",
they actually provide *less* choice than what they forked from.
The preceding is simply not true. You can easily run any init system
*except* systemd on Devuan. Running runit, s6, or Epoch on Debian is
crazily difficult: I know, I've done it.
Post by Solomon Peachy via Ale
Their sole differentiating feature is the outright removal of
libsystemd.so from filesystems; the "alternative" inits that are the
raison d'être for Devuan aren't supported any better there than in
upstream Debian.
Simply not true. Devuan removes the tight weldings that make it insanely
difficult to lay down an alternative init, so you can install pretty
much any simple init system, including runit, s6, Epoch, or BusyBox init.
Meanwhile, I wouldn't want to bet my business plan on Debian keeping
their sysvinit package and their OpenRC package functional as time goes
on.
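(On Devuan the swap is an ordinary package operation; exact package
names vary by release, but it is along the lines of:)

  apt-get install runit   # or openrc, sysvinit, etc.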

SteveT
Solomon Peachy via Ale
2018-02-20 15:22:40 UTC
Post by Steve Litt via Ale
Yeah, well, they didn't have Red Hat's billions behind them when they
negotiated with the upstreams. They had day jobs, many of which were
They didn't negotiate squat. They stomped off in a huff, and haven't so
much as attempted to submit a single patch to anything, least of all
Debian. Which isn't exactly rolling in cash either.
Post by Steve Litt via Ale
negatively impacted by systemd. But anyway, the word "codepaths" isn't
defined in dictionary.com, the Urban Dictionary, acronymfinder.com,
or a generic web search, so unless by "codepaths" you mean the sysvinit
start and stop scripts, I doubt there was anything barely functional
pre-systemd, and once again, there were and are plenty of init systems
that don't use those start and stop scripts.
And are you seriously pulling out dictionary definitions? What do you
think this is, a high school debate? I suggest you look up the
definition of "jargon". Any half-competent software developer will know
exactly what "codepaths" means.

But I digress -- what I'm referring to has zero to do with "init
scripts"; instead I'm referring to logind vs ConsoleKit for managing
desktop user sessions, which is one example of the "embrace, extend,
extinguish" that the "systemd cabal" is repeatedly accused of.

ConsoleKit is a festering pile of swill, one hack piled on top of
another (with unique per-distro and desktop-environment code, I might
add), and those depending on it (GNOME and KDE, plus nearly every
distro of note) dropped it like a hot potato because it was so awful
and its promised rewrite (aka ConsoleKit2) had yet to materialize.

It's telling that current development efforts are along the lines of
re-implementing logind's dbus API (e.g. elogind or logind-shim) with
varying amounts of functionality rather than attempting to fix
ConsoleKit. (And for all of Devuan's hand-waving about "init freedom",
the actual work on the likes of elogind, eudev, and whatnot is being
done by Gentoo developers who are doing actually useful work. Devuan
just got smacked for implying otherwise, BTW.)
Post by Steve Litt via Ale
The preceding is simply not true. You can easily run any init system
*except* systemd on Devuan. Running runit, s6, or Epoch on Debian is
crazily difficult: I know, I've done it.
How does Devuan make using runit, s6, or Epoch any easier than Debian does?

(Granted, a week ago they modified d-i to allow for more selections at
installation time, but that's not in the wild yet.)
Post by Steve Litt via Ale
Simply not true. Devuan removes the tight weldings that make it insanely
difficult to lay down an alternative init, so you can install pretty
much any simple init system, including runit, s6, Epoch, or BusyBox init.
Meanwhile, I wouldn't want to bet my business plan on Debian keeping
their sysvinit package and their OpenRC package functional as time goes
on.
What weldings are these? Note that Devuan is 99.44% Debian; as I write
this there are only 184 (source) packages different out of 28,036.
Under the hood, the overwhelming majority of the changes consisted of
branding (Debian -> Devuan) and removing any vestiges of systemd, e.g.
compile-time options to disable systemd integration, unit files,
and so forth.

Unless by "weldings" you're referring to Debian's famous attention to
detail, where they try to ensure every option is fully supported
throughout the entire system (including across upgrades). Relaxing
quality standards isn't exactly something to brag about.

- Solomon
--
Solomon Peachy pizza at shaftnet dot org
Coconut Creek, FL ^^ (email/xmpp) ^^
Quidquid latine dictum sit, altum videtur.
DJ-Pfulio via Ale
2018-02-19 19:15:54 UTC
Looks to me like Silicon Valley is the enemy. We know we can't trust
insta-goo-tweet-book with our data. I don't trust most smartphone apps
that want network access or access to contacts or local files or
calendars or ... anything they aren't directly supposed to make use of
either.

All that containers do is add more choice to the possible solutions mix.

Many businesses will NEVER deploy a container, ever.

Just like some businesses haven't deployed a single VM, ever. Shocking,
I know.

Either the decision makers don't buy into the hype or they are just too
small to want the extra complexities or they are too small to care about
going from 20 systems to 3. Businesses like that do exist.

There are 18,500 large businesses in the USA.
There are 28 MILLION small businesses in the USA. 22 million are
single-person businesses. Over 50% of the single-person shops are over
50 yrs old. Doubt most of them will deploy containers, ever.

Huge computing organizations will probably find a way to use containers
for all the reasons containers earned their hype. As you work down to
smaller numbers of systems/applications, the chances of container use
get less and less and less, until they make very little sense to a
20-person company that is smart enough NOT to put its customer data in
"da cloud."

People and companies in "da cloud" have a vested business interest in
saying I worry too much and I'm a dinosaur. They want our money. They
want to be efficient more than they want to be secure. Fine.

I've learned from mistakes made in the past when data I didn't consider
sensitive got out AND was abused. I may be old, but I do learn.
Post by Jerald Sheets via Ale
So let me interject into this conversational process a “norm” that’s
evolving out in silicon valley.
_______________________________________________
Ale mailing list
***@ale.org
http://mail.ale.org/mailman/listinfo/ale
See JOBS, ANNOUNCE and SCHOOLS lists at
http://mail.ale.org/mailman/listinfo
Jerald Sheets via Ale
2018-02-19 19:55:11 UTC
Permalink
Looks to me like Silicon Valley is the enemy. I don't trust most
smartphone apps
that want network access or access to contacts or local files or
calendars or ... anything they aren't directly supposed to make use of
either...
I've learned from mistakes made in the past when data I didn't consider
sensitive got out AND was abused. I may be old, but I do learn.
Ummm… none of this was about you.


I’m telling you what is evolving out of the valley, not what you should do. I’m giving some insight into what is being baked into standard operating procedures, site reliability engineering, DevOps practices, and security departments, and shared among big-company IT and security departments. It WILL eventually make its way down to the company on the corner of any appreciable size, because of the pervasive nature of actual best practices applied to our discipline.

I’m taking my time to let you guys and gals out there know some of the things that are bringing the highest salaries (many in excess of 150k RIGHT HERE in Atlanta), and the technologies and skillsets necessary to get them. As many on this very group will attest, I have assisted some in getting interviews, jobs, training, resources, etc. I’m trying to help as much as I can, and the thrust of what I deal with as part of my job was germane to the direction of the conversation.

If you want no part of it, think you know better, have all the answers, etc., that’s fine, but keep your condescending ingratitude to yourself.


—j
Steve Litt via Ale
2018-02-19 17:27:05 UTC
Permalink
On Sun, 18 Feb 2018 21:57:30 -0500
Post by Solomon Peachy via Ale
Post by Steve Litt via Ale
It also isn't all that necessary. Most run scripts, as they come
from the factory (at least with Void Linux) have no process
dependency checking, and in practice things seem to work just fine.
But if one wants process dependency checking, it simply requires a
simple "if" statement within the dependent process' run script.
So... if the parent restarts, who is going to restart the dependents?
I was discussing dependent daemons that depend on a dependency daemon.
When one or both crash, Runit restarts it/them within 5 seconds.

Now you're talking about parents and what, dependents, children? You
mean like Apache, which forks a process for each connection? With
software I've seen, the parent cleans up the children. If the parent
crashes, it sends a hup or something to the children, which should
cause them to shut down. With the software I've seen, if a daemon forks
processes, it's not the init system's business to track them, kill or
restart any "children."

Anyway, a runit run script can run a background process and then exec to
the daemon to be run, or it can send itself a message to start a
different daemon before starting the one it's supposed to start. It's a
shellscript: The possibilities are infinite.

And before someone else brings it up, it's a very different kind of
shellscript than the 300 line S12_my_daemon_five_event_script thingys.
I don't think I ever saw a run script more than 15 lines long, and most
are under 7.
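
For the unfamiliar, here's roughly what a complete run script with a
dependency check looks like. The service and daemon names are made up
for illustration; the sv check idiom is the usual pattern on Void:

  #!/bin/sh
  # /etc/sv/mydaemon/run -- runsv executes this, and reruns it if it exits
  sv check postgresql >/dev/null || exit 1 # dependency down? bail out;
                                           # runsv retries in a second
  exec mydaemon --foreground               # exec, so runsv supervises the
                                           # daemon process directly

That's the whole file.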
Post by Solomon Peachy via Ale
Post by Steve Litt via Ale
Runit can do that. I'm not sure it's a good idea: I'd rather ip link
set dev eth0 down;ip link set dev eth0 up, and same with wlo1. With
such a change, I'd rather fix it up manually. For situations where
the network goes down and back up again, all I can say is my
computer brings back its network connection without the need of
having the network be a service.
I'd rather _not_ fix things up manually; by the time I've finished
everything it would have been faster (and less disruptive) to just
reboot the system.
Oh come on, man:

ip link set dev eth0 down
ip link set dev eth0 up
ip link set dev wlo1 down
ip link set dev wlo1 up

Put 'em in a shellscript. While you're at it, end the shellscript with a
test for upness:

ip addr list
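
All together, something like this; interface names vary per machine, and
the grep at the end is just one crude way to eyeball upness:

  #!/bin/sh
  # bounce the wired and wireless interfaces, then show what came back
  for dev in eth0 wlo1; do
      ip link set dev "$dev" down
      ip link set dev "$dev" up
  done
  ip addr list | grep -E 'state UP|inet '

A couple minutes of work, reusable forever.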



[snip]
Post by Solomon Peachy via Ale
Post by Steve Litt via Ale
Problem is, with systemd's welded together entanglement of large
sections of software with applications and the underlying OS,
systemd completely changes the way you adjust your software, and
IMHO not for the better if you're at all DIY.
You and I draw the "DIY" line at different places.
Geez, I never noticed that. I bet nobody else did either :-)

You and I obviously draw *a lot* of lines in different places.
Post by Solomon Peachy via Ale
(I don't administer my own systems for the joy of it; they have
specific jobs to fulfil and I'm too much of a paranoid git to trust
my data on anyone else's systems..)
Who in the world administers their systems for the joy of it? Nobody I
know. DIY people administer their systems to make their systems do
things how *they* want them done. To mold their computer to their use
case, whether they type 120 WPM or 5 WPM. Is their vision 20/10, or
20/60 corrected? Do they think logically like a computer, or flightily
like a human? Do they have breaks in their workday that neatly
accommodate that 5-minute process, or do they have to go full out every
minute of the day? Each one of these things influences their optimal
user interface, so they change their interface via DIY.

Some people choose a distro to accommodate their workflow. This becomes
problematic when different distros offer different subsets of their
optimal user interface elements. Who can forget the blind exodus from Ubuntu when
Ubuntu replaced Gnome with (urk) Unity? DIY people just changed the
WM/DE (Window manager/Desktop environment) and moved on.

I spent 2 days making a home-grown hierarchical menu program in
1999, and have used it as part of my workflow ever since, including
quickly bolting on a menu interface to an executable program with very
complex command line args. DIY people do this: They replace repetitive
riffs with menus, shellscripts, whatever. A little more work up front,
a lot more relaxation going forward.

And in their travels, DIY people learn the value of simple modules
connected only on a need to know basis via simple, small and well
documented interfaces. I'm not going to explain it to you; I learned it
when I used to fix electronics for a living: systems that are easily
separated are easily troubleshot, easily repaired, and easily modified.
DIY people don't put up with units interconnected every which way by
thick, complex tentacles or even thick, go everywhere busses.

DIY isn't for everyone. It takes a little extra work up front. It takes
certain skills not everyone has. It invariably runs afoul of what's
popular, because features and brokenness drive new purchases: DIY cuts
into profits.

SteveT
_______________________________________________
Ale mailing list
***@ale.org
http://mail.ale.org/mailman/listinfo/ale
See JOBS, ANNOUNCE and SCHOOLS lists at
http://mail.ale.org/mailman/listinfo
Solomon Peachy via Ale
2018-02-19 20:40:01 UTC
Permalink
Post by Steve Litt via Ale
ip link set dev eth0 down
ip link set dev eth0 up
ip link set dev wlo1 down
ip link set dev wlo1 up
That restarts the *interface*, not the services that need kicking because
their public addresses changed out from under them. (Most stuff doesn't
listen for the netlink events that are broadcast when interface properties
change, which is a non-portable Linux-ism anyway...)
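
(If you wanted to DIY that part too, the glue would look something like
this; "ip monitor" is just a userspace view of those netlink broadcasts,
and the daemon name here is made up:

  # restart a daemon whenever any address changes -- Linux-only sketch
  ip monitor address | while read -r _event; do
      sv restart mydaemon
  done

...which is exactly the sort of per-daemon glue that almost nothing
ships with, hence my point.)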
Post by Steve Litt via Ale
DIY isn't for everyone. It takes a little extra work up front. It takes
certain skills not everyone has. It invariably runs afoul of what's
popular, because features and brokenness drive new purchases: DIY cuts
into profits.
Opportunity cost is also a very real thing. Be honest about *why*
you're DIY'ing, and you may find you'd rather draw the line
differently.

- Solomon
--
Solomon Peachy pizza at shaftnet dot org
Coconut Creek, FL ^^ (email/xmpp) ^^
Quidquid latine dictum sit, altum videtur.