Discussion:
[ale] shared research server help
Todor Fassl
2017-10-04 22:32:32 UTC
I manage a group of research servers for grad students at a university.
The grad students use these machines to do the research for their Ph.D.
theses. The problem is that they pretty regularly kill off each other's
programs by using up all the RAM. Most of the machines have 256 GB of
RAM. One kid uses 200 GB and another 100 GB, and one or the other, often
both, die. Sometimes they bring the machines down by hogging the CPU or
using up all the RAM. Well, the machines never crash, but they might as
well be down.

We really, really don't want to force them to use a scheduling system
like slurm. They are just learning, and they might run the same piece of
code 20 times in an hour.

Is there a way to set a limit on the amount of RAM all of a user's
processes can use? If so, we were thinking of setting it at 50% of the
on-board RAM. Then it would take 3 students together to trash a machine.
It might still happen, but it would be a lot less frequent.

Any other suggestions? Anything at all? Just keep in mind that we really
want to keep it easy for the students to play around.
--
Todd
Jim Kinney
2017-10-04 22:48:59 UTC
ulimit is a way to set soft and hard limits on resource usage including
RAM consumed.
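For example, a rough sketch (keeping in mind that ulimit's address-space
cap is per-process, not per-user, and that the persistent limits.conf
form assumes pam_limits is enabled; the "gradstudents" group name is
just for illustration):

  # Per-shell cap on virtual memory (address space), in KB:
  ulimit -S -v 67108864     # soft limit: 64 GB
  ulimit -H -v 134217728    # hard limit: 128 GB

  # Persistent equivalent in /etc/security/limits.conf (values in KB):
  @gradstudents   soft   as   67108864
  @gradstudents   hard   as   134217728

A user can raise their own soft limit up to, but not past, the hard
limit.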
Chuck Payne
2017-10-05 01:12:02 UTC
WOW, I guess I am old. I remember back in college I had to schedule time
on the mainframe. I hated that machine, but I had to use it for my
business class. If it was an Apple II or Tandy 1000, as long as the lab
was open, we could use it. On the Tandy, you just had to remember the
PARK command when powering them down. DOS 2.5.

I was so happy I had my little Atari 1040ST, though I should have
seriously studied C back then instead of playing Phantasie 3.
--
Terror PUP a.k.a
Chuck "PUP" Payne
-----------------------------------------
Discover it! Enjoy it! Share it! openSUSE Linux.
-----------------------------------------
openSUSE -- Terrorpup
openSUSE Ambassador/openSUSE Member
skype, twitter, identica, friendfeed -- terrorpup
freenode (irc) -- terrorpup/lupinstein
Registered Linux User ID: 155363

Have you tried SUSE Studio? Need to create a Live CD, an app you want to
package and distribute, or your own Linux distro? Give SUSE Studio a
try.
Pete Hardie
2017-10-05 01:27:08 UTC
I'm sure there are more than a few of us here who remember using punch
cards.
--
Pete Hardie
--------
Better Living Through Bitmaps
DJ-Pfulio
2017-10-05 10:32:41 UTC
No guessing, Chuck. We are old.

In my school, we had to sign up for seats at a computer. There were 4
computers and 80+ students. Getting 2 hrs a week was difficult, since
only 1 kid in school had a computer at home.
The business department had 25 $3000 computers that were used to teach
typing. Nothing else. Really pissed me off.

Compiles for "hello world" were 45 minutes. Links were also 45 minutes.

At my first college, the computer that ran programs was in a different
building from the card punch machines. There was usually a 15-person
line to hand your deck to the operator ... for the 30 seconds it took to
run my programs -- or, more likely, to get an error printout over some
stupid mistake.

At the next college, we had CDC mainframes with terminals spread all
over campus and dial-up capabilities. 1200 baud rocked, but the
disconnections sucked.
--
Got Linux? Used on smartphones, tablets, desktop computers, media
centers, and servers by kids, Moms, Dads, grandparents and IT
professionals.

Jim Kinney
2017-10-05 11:52:15 UTC
Back to the original issue:

A tool like torque or slurm is really your best solution for intensive
shared resources. It prevents 2 big jobs from eating the same machine
and can also encourage users to write code that manages resources
better, so they can run more jobs.
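A minimal batch script as a sketch (this assumes a working Slurm
install; the job name and train.py are placeholders):

  #!/bin/bash
  #SBATCH --job-name=thesis-run
  #SBATCH --mem=32G            # scheduler enforces this RAM ceiling
  #SBATCH --cpus-per-task=4
  #SBATCH --gres=gpu:1         # e.g. one P100
  #SBATCH --time=02:00:00
  srun python train.py

Submitted with "sbatch job.sh", the job waits in the queue until 32G of
RAM and a GPU are actually free, so two big jobs can't land on the same
exhausted box.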

I have the same problem. One heavy GPU machine (4 Tesla P100s) only has
64 GB of RAM. A student tried to load 200+ GB of data into RAM.

A few crashes later, he can run 2 jobs at once; each eats only 30 GB of
RAM and one P100.
--
Sent from my Android device with K-9 Mail. All tyopes are thumb related and reflect authenticity.
DJ-Pfulio
2017-10-05 12:26:31 UTC
I use taskspooler to manage computer batch workloads, but don't know how
to force other users to use it.
https://www.linux.com/news/queuing-tasks-batch-execution-task-spooler
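Rough usage sketch (the binary is "ts", renamed "tsp" on Debian/Ubuntu;
train_model.sh is a placeholder):

  ts -S 2                    # allow at most 2 jobs to run at once
  ts ./train_model.sh run1   # queue a job; prints a job ID and returns
  ts ./train_model.sh run2
  ts -l                      # list queued/running/finished jobs
  ts -c 0                    # show the captured output of job 0

The catch is that each user gets their own private queue, which is why
it can't police anyone who doesn't opt in.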
Jerald Sheets
2017-10-05 13:24:51 UTC
A tool like torque or slurm

But… we all know where Slurm comes from!
Todor Fassl
2017-10-05 13:27:00 UTC
Right, Jim, another aspect of this problem is that most of the students
don't even realize they need to be careful, much less how to be careful.
"What? Is there a problem with me asking for 500 gigabytes of ram?"
Well, the machine has only 256. But I'm just the IT guy and it's not my
place to demand that these students demonstrate a basic understanding of
sharing resources before getting started. The instructors would never go
for that. I am pretty much stuck providing that informally on a
one-to-one basis. But I think it would be valuable for me to work on
automating that somehow. Pointers to the wiki, stuff like that.

Somebody emailed me off-list and made a really good point. The key, I
think, is information. Well, that and peer pressure. I know nagios can
trigger an alert when a machine runs low on ram or cpu cycles. It might
even be able to determine who is running the procs that are causing it.
I can at least put all the users in a nagios group and send them alerts
when a research server is near an OOM event. I'll have to see what kind
of granularity I can get out of nagios and experiment with who gets
notified. I can do things like keep widening the group that gets
notified of an event if the original setup turns out to be ineffective.
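Something like this sketch might work (it assumes the common third-party
check_mem plugin exposed through NRPE; the host and contact group names
are made up):

  define command {
      command_name  check_remote_mem
      command_line  $USER1$/check_nrpe -H $HOSTADDRESS$ -c check_mem
  }

  define service {
      use                  generic-service
      host_name            research-server-01
      service_description  RAM near OOM
      check_command        check_remote_mem
      contact_groups       research-students
  }

Widening who gets notified would then just be a matter of editing the
research-students contact group.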

This list has really come through for me again just with ideas I can
bounce around. I'll have to tread lightly though. About a year ago, I
configured the machines in our shared labs to log someone off after 15
minutes of inactivity. Believe it or not, that was controversial. Not
with the faculty but with the students using the labs. It was an easy
win for me but some of the students went to the faculty with complaints.
Wait, you're actually defending your right to walk away from a
workstation in a public place still logged in? In a way that's not such
a bad thing. This is a university and the students should run the place.
But they need a referee.
--
Todd
Lightner, Jeffrey
2017-10-05 14:25:07 UTC
It doesn't just happen with students.

A few years ago I worked at a big network gear maker. We had multiple
test/dev, staging, and production environments. Right after I started,
one of the first things they assigned to me was determining which of 2
environment groups was using more resources on their shared server.
Since it was HP-UX, I was able to set up data captures based on
environments in Glance/MeasureWare. A day later I was able to send
graphs showing that one environment group was using 95% of the
resources.

Graphs impress the untrained so much more than detailed analysis or you
telling them the problem. Being able to quickly give them an answer to
what had apparently been a long-running argument was one of the many
things that made them ask the headhunter for a person just like me when
I left to return to Atlanta.

One thing that occurred to me on your original question was the idea of
giving students their own virtual machines. You can assign vcpus,
storage, and RAM to virtuals so that students can't exceed what has been
assigned. Of course, I've not worked with slurm or other
resource-limiting tools on Linux (other than ulimits, as mentioned by
someone else).
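For instance, a sketch with libvirt/KVM (names, sizes, and the install
URL are illustrative; --memory is in MiB, so 32768 caps the guest at
32G):

  virt-install \
    --name student-vm-01 \
    --memory 32768 \
    --vcpus 8 \
    --disk size=200 \
    --location http://mirror.example.com/centos7/os/x86_64/ \
    --graphics none \
    --extra-args "console=ttyS0"

The guest kernel never sees more than what's assigned, so one student
can't starve another.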

Jerald Sheets
2017-10-05 14:37:04 UTC
Post by Todor Fassl
About a year ago, I configured the machines in our shared labs to log
someone off after 15 minutes of inactivity. Believe it or not, that was
controversial. [...] This is a university and the students should run
the place. But they need a referee.
TL;DR: I disagree. Follow compliance guidelines. People could go to jail in a governmental (college) setting. The laws are the referee, not us.



I respectfully disagree.

Let me flex my ego muscle for a moment: “Jerald Sheets, Lead Security Architect - Infrastructure Hardening - PayPal, Inc” at your service.


Any university is bound by common rules and guidelines to adhere to
“generally accepted security standards” in regard to systems connected
to the university infrastructure for the purposes of satisfying
protection of the PII of students, faculty, and staff. This is also
governed by FERPA and can include jail time under the right
circumstances… Just saying.

Regardless of physical, network, and other similar security controls in place, host security demands that TMOUT be on, enabled, and unable to be circumvented.
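A minimal sketch of that control (it covers idle login shells only --
bash and ksh honor TMOUT, and readonly keeps users from unsetting it;
GUI sessions still need a locking screensaver):

  # /etc/profile.d/tmout.sh
  TMOUT=900        # 15 minutes, matching the PCI threshold below
  readonly TMOUT
  export TMOUT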




HIPAA:

A covered entity should activate a password-protected screensaver that automatically prevents unauthorized users from viewing or accessing electronic protected health information from unattended electronic information system devices.

SOX:

User session timeout is defined and in place for authorized users.
Audit and review user privil[...]

ITIL:

ISO/IEC 27002
11.5.5 Session time-out

PCI:

8.5.15 Idle Session Timeout threshold

Disconnect Idle session is less than or equal to 15 minutes



These are the reasonings I use with CISOs, CIOs and CEOs to explain to them that just because Devs are yelling about various security controls does not mean they get to have what they want.

“You could go to jail over this one” has been uttered more than once by me, and I count it a great responsibility on my part as a Security guy first and a Systems guy second to ensure that the law and various compliance guidelines are followed.

If I have people continuing to demand non-compliance, I work up a business case for them to sign their name to “when the auditors come around on this, they’ll want to know who’s responsible and who should be prosecuted in the event of data loss or breach”. That usually gets their attention and gets them to think twice about these things.

The balance is ALWAYS security versus usability, and the security guidelines we are both legally and honor-bound to follow are the “referee” here. You aren’t. I implement what’s given to me within the confines of the law, and I do not step outside of it. No student, teacher, full or associate professor, department chair has the authority to overrule these guidelines.

The board or president of your university does, and they are ultimately responsible (as any CEO would be) for what goes on both physically and virtually on their campus.


—j
Jim Kinney
2017-10-05 15:06:25 UTC
+1!

I'm the admin. The machines are MINE. I WILL have the authority to do what
is needed to fulfill my responsibilities. I have kicked a student off
systems before (hyper-extreme case). His code was utter crap and resulted
in a reboot EVERY time he ran it (all nodes on the cluster would lock up -
he had been given sudo by his advisor - once I took it away his code
wouldn't run at all). His advisor whined and complained and I made the case
that his idiot student was effectively the turd in the punchbowl for the
cluster (I think I actually used those words) and it broke things for
everyone else. Advisor got him a workstation to use, it was offline most of
the time as he still wrote crap. When he finally got sudo on the
workstation (I had to cough it up as the box belonged to the advisor) the
idiot ran his code as root. Justice happened swiftly as his garbage trashed
the hard drive and deleted his home dir. I don't back up standalone
machines like that, so his work was gone. Good riddance. He (without
permission) installed ubuntu, was the admin and proceeded to crash that box
daily/hourly until he was finally shown the door for failing to complete a
single project assignment.

Devs make terrible admins. Devs make terrible security officers. Devs
need to be throttled by hardware so they learn how to write efficient
code. One faculty member gets the cheapest old crap machines for new
students to run their code on, so they are forced to make improvements
in performance. It's mostly a pretty good idea.
--
James P. Kinney III

Every time you stop a school, you will have to build a jail. What you gain
at one end you lose at the other. It's like feeding a dog on his own tail.
It won't fatten the dog.
- Speech 11/23/1900 Mark Twain


http://heretothereideas.blogspot.com/
Jerald Sheets
2017-10-05 15:32:33 UTC
Post by Jim Kinney
Devs make terrible admins. Devs make terrible security officers. [...]
This is where we “DevSecOps”-ians come in. I democratize the entire security process so the devs can secure via API calls and document the whole shooting match. Best part is when they go to the CI pipeline and they haven’t written tests nor have they implemented security. If those things aren’t in their code, it gets kicked back out and refuses merge.

They get all mad, call meetings with directors and such… “Well, your
developer neither leveraged the documented security APIs nor did they
write tests for their code. We don’t allow that into our CI pipeline,
because that mess could make it to production.” “I don’t know a single
developer that writes tests the way you’re saying.” I sit back and
grin… “Why not? The sysadmin (me) uses TDD for ALL his automation code.
If an ops guy does, why in the heck does a developer NOT?”


I love my job sometimes.

—j
Todor Fassl
2017-10-05 15:57:21 UTC
Hmmm... I kind of feel sorry for that student. Most people aren't
idiots on purpose. That kid's dream might have been crushed. Some of us
have a talent for making computers work and some of us don't. If you
don't have the talent, you ought never to pursue a career doing exactly
that. Even that was kind of stupid. But that student wasted a lot of
time and money pursuing a career only to find out he can't cut it.
That's sad.

IMO, the customer is always right. The students are the customers.
Obviously, I'm not saying they ought to be able to ignore security
regulations or even common sense. It is a good thing they have this
sense of entitlement because they are entitled. These aren't my
machines. The students paid for them, not me. Well, the federal and
state governments paid a hefty proportion too. As a taxpayer, I paid for
them too. But the taxpayers provided the funding with the understanding
they'd be used for educational purposes. They're not mine.

No business says the customer is always right meaning it to be taken
literally. They say that to remind themselves and their employees that
customers are not the enemy. I think if you don't tell yourself the
customer is always right, it is too easy to become a bad IT guy.
--
Todd
Jim Kinney
2017-10-05 16:47:28 UTC
If that student hadn't been counseled, warned repeatedly, and coached
on the errors for nearly 6 months, yeah, it would have been a sad thing.
He refused to learn, copied (plagiarized) code from websites, and
basically showed no willingness to improve. My only regret is that it
took 6 months.

As for students being the customers in education settings: Nope. They are
not the customer (usually). They are the raw material. As an educator (I am
one as the head IT guy) I am reshaping students into societally usable
people. The parents are the prime customers as they are often funding the
education. Yet they have to be treated with very long poles or else they
start demanding accolades Jr. hasn't earned (been there, quit because of
it). Society at large is the ultimate customer even for a private
institution. Sadly, some raw material just doesn't meet quality standards.
I've never used that mentality as a reason to have a permanent percentage
that will fail. That is utterly wrong. It is right, however, to use the QA
mentality as a cut-off for poor performance. I've always viewed education
as a manufacturing process. Sometimes a product has to go through the forge
more than once. I was one of those. The double-struck coins are the
sharpest and most prized. The students that fall down and pick themselves
back up and go again are the ones I've seen that really apply themselves in
nearly everything afterwards.
Lightner, Jeffrey
2017-10-05 16:48:34 UTC
So you're saying the BOFH is NOT a role model to which we should aspire? :-)

Jim Kinney
2017-10-05 17:29:17 UTC
Sometimes the B is for "Benign", sometimes it's for "Brutal", but it's
always understood to be "Bastard" in all use cases :-)
Jerald Sheets
2017-10-05 19:29:54 UTC
Post by Lightner, Jeffrey
So you're saying the BOFH is NOT a role model to which we should aspire? :-)
Whoa, WHOA!

Let’s not get crazy.


—j

Jim Kinney
2017-10-05 14:50:21 UTC
The politics can get messy. Jeffrey's later post about providing data
on the hog issue is exactly right.

I use ganglia to provide a real-time display of cluster usage (RAM,
CPU, networking; adding GPU now).
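For the GPU piece, a sketch of feeding a custom metric into ganglia
with gmetric (assumes gmond and nvidia-smi are present; run it from
cron or a loop):

  GPU_UTIL=$(nvidia-smi --query-gpu=utilization.gpu \
             --format=csv,noheader,nounits | head -1)
  gmetric --name=gpu0_util --value="$GPU_UTIL" \
          --type=uint16 --units=percent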

I guess I'm pretty lucky as I'm also "just the IT guy" but I get to make it
plain that my job is to help them graduate. Yes, I do spend time
individually helping each student learn how to not break things. I also
make it very plain that a system crash is an extreme failure on their part.
Sure, I have to reboot a machine (YAY addressable PDUs and IPMI!) but it
breaks _their_ work worse. My current quest is to beat them all with the
clue-by-four of parallelism. LEARN how to think in parallel processes.
LEARN how to write code that can support multiple threads. LEARN how to
split large data sets into chunks that can be processed by multiple
systems/cores/nodes/gpus, etc.

Latest fun: machine learning on image analysis for eye tracking from a
video for ADHD work. It generates video with 15K frames; each frame has
a data set of eye position in pixel coordinates per eye. The process
was trained on the worst design of all: each frame is cropped to
generate an enlarged image of each individual eye, and that's now run
again to determine gaze direction. 15,000 frames -> 30,000 images =>
all single threaded. <sigh>
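Even without touching the model code, the per-frame step fans out
trivially, e.g. (process_frame is a stand-in for the crop/gaze step):

  ls frames/*.png | parallel -j "$(nproc)" ./process_frame {}
  # or, without GNU parallel:
  ls frames/*.png | xargs -P "$(nproc)" -n 1 ./process_frame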

<rant> I don't come from a comp-sci background so I've had to figure out a
lot on my own. It seems the younger programmers are more and more
disconnected from the reality of the hardware they use. "Load this data set
and start my algorithm" is the mindset. The engineering mentality of HOW to
do the process using both hardware and software is missing. </rant>
Jerald Sheets
2017-10-05 15:27:55 UTC
Post by Jim Kinney
It seems the younger programmers are more and more disconnected from
the reality of the hardware they use. [...]
Hey, you know good and well I was a music major. I only just finished last year in Theology, so there’s that. :-D


—j
Solomon Peachy
2017-10-05 16:36:09 UTC
Post by Jim Kinney
It seems the younger programmers are more and more disconnected from
the reality of the hardware they use. [...]
This lament has *always* been the norm.

Kids today aren't any more clueless about hardware than they were 20
years ago -- they were as a whole pretty bad back then too. Kids today
aren't any worse with parallelization than they were 20 years ago
either, as they were pretty terrible then too. (Heck, if anything,
things are _better_ today due to better tools.)

Today, just as it was before, some few "get it", but most don't. Just
as it is in all walks of life.

- Solomon
--
Solomon Peachy pizza at shaftnet dot org
Coconut Creek, FL ^^ (email/xmpp) ^^
Quidquid latine dictum sit, altum videtur. ("Whatever is said in Latin
sounds profound.")