I'm miserably aware of some large-scale admin stuff I need to implement. Always a challenge with 1.5 full-time people and 3 full-time people's worth of work.
Highest priority is to host our own hub. The devels are all "this is just research" until they bypass everything and point a proxy they control at a VM running their latest container.
Post by Jim Kinney
From a sysadmin perspective, containers make it far too easy to bypass
all security protocols. Until it's live, it's a binary blob waiting to
suck in code from unknown sources and send information to unknown
locations. Virtual machine security is better, and better understood,
than container security.
You host your own hub. That's the answer. We're prevented from
"reaching out" to the 'net for anything at all. I've built my own
container registries internally, and only pull images *I* have rolled
from there. I never touch DockerHub.
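For anyone who hasn't set this up: the moving parts are just the stock registry plus a build host that only ever tags and pushes to it. Here's a rough sketch of the push side in Python; the registry hostname and image names are made up, and a real deployment would front the registry with TLS and authentication.

#!/usr/bin/env python3
"""Rough sketch: build an image from our own Dockerfile and push it to an
internal registry. registry.internal:5000 is a made-up hostname."""
import subprocess

INTERNAL_REGISTRY = "registry.internal:5000"   # hypothetical internal hostname
IMAGE = f"{INTERNAL_REGISTRY}/myteam/app:1.0"  # hypothetical image name/tag

def sh(*args):
    # Run a command and blow up if it fails, so nothing half-built gets pushed.
    subprocess.run(args, check=True)

# Build from a Dockerfile whose FROM line points at a base image we also
# maintain internally, so nothing is fetched from Docker Hub.
sh("docker", "build", "-t", IMAGE, ".")

# Push to the internal registry; deploy hosts are configured to pull only
# from this address.
sh("docker", "push", IMAGE)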
Post by Jim Kinney
Until I can get a SHA256-signed docker container with a sig I trust, I
can't allow them to touch my storage cluster.
Again, some setup is necessary, but you can completely lock it down to
your own internal resources. This is a non-issue.
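And on the SHA256 point: once you pull only from your own registry, you can also gate on the image digest before anything runs. A sketch of that check, with an invented allow-list file and image name; the digest itself comes from the RepoDigests field docker records for a pulled image.

#!/usr/bin/env python3
"""Sketch: refuse to run an image whose digest is not on our approved list.
The image name and approved-digests.txt file are invented for illustration."""
import json
import subprocess

IMAGE = "registry.internal:5000/myteam/app:1.0"   # hypothetical internal image

# One sha256:... digest per line, written by whoever signs off on a build.
with open("approved-digests.txt") as fh:
    approved = set(fh.read().split())

# docker image inspect emits a JSON array; RepoDigests holds repo@sha256:... entries.
out = subprocess.run(
    ["docker", "image", "inspect", IMAGE],
    check=True, capture_output=True, text=True,
)
digests = json.loads(out.stdout)[0].get("RepoDigests", [])

if not any(d.split("@", 1)[1] in approved for d in digests):
    raise SystemExit(f"{IMAGE}: digest is not on the approved list, refusing to run it")

subprocess.run(["docker", "run", "-d", IMAGE], check=True)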
Post by Jim Kinney
How do containers get updated for security patches? They don't. Toss
it and rebuild.
You do it. Don't rely on Docker or the community. Roll your own
images (just like folks who use custom AMIs) and maintain full control
of "all the things".
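The "toss it and rebuild" part is exactly how patching works in this model: OS updates land in the base image, and every app image gets rebuilt on top of it on a schedule. A sketch of that rebuild job follows; the registry, repository, and tag names are invented.

#!/usr/bin/env python3
"""Sketch of the toss-and-rebuild model: security patches arrive by rebuilding
the image on a freshly patched base, never by patching a running container."""
import datetime
import subprocess

REGISTRY = "registry.internal:5000"       # hypothetical internal registry
APP_REPO = f"{REGISTRY}/myteam/app"       # hypothetical application repository

def sh(*args):
    subprocess.run(args, check=True)

# --pull forces a re-pull of the FROM image, so the rebuild picks up whatever
# patches went into our internally maintained base since the last run.
tag = f"{APP_REPO}:{datetime.date.today():%Y%m%d}"
sh("docker", "build", "--pull", "-t", tag, ".")
sh("docker", "push", tag)

# The running container is then replaced with one started from the new tag;
# nothing is ever patched in place.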
Post by Jim Kinney
That sets up a churn of installing new containers which will, in time,
dull the security focus of the build process.
Which is why we automate. I personally use Puppet, as that is my SME
domain, but I've seen workflows for both Chef and Ansible. Also a
non-issue.
Post by Jim Kinney
Time passes and a mission critical process is running on a gaping
security hole, because whoever built it got a better job offer and left.
All containers should be curated by Systems. The Developers should
submit them for security scanning, or you should employ a DevSecOps
model for deployment, i.e., federate security scanning by providing
OS, app, transport, penetration, and network security testing as APIs
that devs can leverage instead of leaving it all to the security team.
Left to their own devices, with unreasonable deploy timelines set for
them and golf-playing pointy-hairs demanding unreasonable ship dates,
it'll never happen.
This should all be automated as part of a security CI/CD pipeline;
without a "pass" from that pipeline, nothing can ever be deployed into
production. This is how we do it.
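To make that concrete: the gate is just a stage in the pipeline that calls each scanning service and refuses to hand off to deploy unless everything passes. The endpoints and image name below are hypothetical stand-ins for whatever Security actually exposes.

#!/usr/bin/env python3
"""Sketch of a security gate stage in a CI/CD pipeline. The scan endpoints
and image name are hypothetical stand-ins for whatever OS, app, and network
scanning services Security exposes as APIs."""
import json
import sys
import urllib.request

IMAGE = "registry.internal:5000/myteam/app:1.0"   # hypothetical image under test

SCANS = {
    "os-packages": "https://security.internal/api/scan/os",       # hypothetical
    "app-deps":    "https://security.internal/api/scan/app",      # hypothetical
    "network":     "https://security.internal/api/scan/network",  # hypothetical
}

def run_scan(name, url):
    # POST the image reference to a scan service and read back a pass/fail verdict.
    body = json.dumps({"image": IMAGE}).encode()
    req = urllib.request.Request(url, data=body,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        verdict = json.load(resp)
    print(f"{name}: {'pass' if verdict.get('passed') else 'FAIL'}")
    return bool(verdict.get("passed"))

# Run every scan so the report is complete, then gate on the combined result.
results = [run_scan(name, url) for name, url in SCANS.items()]
if not all(results):
    sys.exit("security gate failed: this image does not get deployed")

print("all scans passed; handing off to the deploy stage")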
Post by Jim Kinney
Developers don't have the responsibility for the integrity of the
system, network, environment. Just their code. The sysadmin is on the
hook for that blob of festering code rot that lets <fill in a cracking
team name here> gain root in a container attached to a few TB of
patient/banking/insurance/ANYTHING data, and suddenly the sysadmin makes
headline news.
Which doesn't really happen in containerized applications, ESPECIALLY
if you're orchestrating them properly and the curation of the
containers is where it belongs: in Systems and Security circles.
FUD doesn't play well here, and this smacks of FUD to me.
Not to call you out, Jim. :D
The real issue is automation should be a core component of Security,
Operations, QA, Development, AND Deployment. None of this crap should
be touched with human hands any more. That's how you end up with an
Equifax website with a U/P of admin:admin, thus this morning's news.
- jms