Darn, another update headache – My Git server

There’s a certain kind of sigh that only sysadmins and developers make—the one that comes out when you realize your Git server is broken again after a system update. I made that sigh this morning. Loudly. Right after I tried to push some code and was greeted by everyone's favorite ambiguous failure:

“fatal: unable to access … Internal Server Error (500)”.

Great.

I wasn’t planning on playing sysadmin today, but here we are.

It Always Starts with an Update

I get it—updates are necessary. Security patches, performance improvements, all that good stuff. But why is it that every third update somehow breaks SSH access, or messes with permissions, or silently resets the Git daemon configuration I painstakingly set up three versions ago?

This time, the issue was sneakier. No crashing services, no port conflicts, no “missing dependency” errors. Just… nothing. My Git server (self-hosted on a modest VPS) was humming along, but no one could clone, push, or pull.

A Game of Debugging

First stop: the logs.

Apache? Clean.

SSH? Fine.

Git? Silent.

Not helpful.
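For the record, "checking the logs" here was just a handful of commands. The paths below assume a Debian-style layout like my VPS uses; adjust for your distro.

sudo tail -n 50 /var/log/apache2/error.log      # Apache errors: clean
sudo journalctl -u ssh --since "1 hour ago"     # SSH daemon messages: fine
sudo grep sshd /var/log/auth.log | tail -n 20   # failed key auth would show up here

Git itself doesn't write a server-side log for plain SSH access, which is why that one stayed silent.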

After a little digging, I found that the update had quietly upgraded Git, reconfigured some system-wide hooks I had tweaked long ago, and reset the ownership and permissions on /var/git/repos. That was fun to discover after 40 minutes of head-scratching.
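Two quick checks made the picture obvious. The package log path is the Debian/Ubuntu default, and the repo path is mine; yours will differ.

grep "Upgrade:" /var/log/apt/history.log | tail -n 1   # what the last apt run actually touched
ls -ld /var/git/repos                                  # should be git:git, was suddenly root:root
ls -l /var/git/repos | head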

Git Permissions, Again?

You’d think I’d learn by now.

Git servers are temperamental about file ownership and access. If you’re running over SSH, the user account running the Git process has to own the repo and have proper access rights. But this update changed ownership to root:root. Why? Who knows. Probably a packaging script assuming default paths.
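A quick way to confirm whether the service account can actually reach a repository, assuming the account is called git and using myproject.git as a stand-in name:

# run a harmless git command as the service account; a permissions
# problem fails loudly here instead of as a vague client-side error
sudo -u git git -C /var/git/repos/myproject.git rev-parse --git-dir

# walk the ownership and permissions of every path component
namei -l /var/git/repos/myproject.git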

The fix?

A chown -R git:git /var/git/repos and resetting a few post-receive hooks. After that, we were back in business.
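In concrete terms, the repair was roughly this, assuming the service account is git and the hooks only lost their ownership and executable bits (if a hook's contents had been overwritten, you'd restore those from a backup instead). The server and repo names in the last line are placeholders.

# give the repositories back to the git user
sudo chown -R git:git /var/git/repos

# make sure the post-receive hooks are executable again
sudo find /var/git/repos -type f -name post-receive -exec chmod +x {} +

# sanity check from a client machine
git ls-remote git@myserver:/var/git/repos/myproject.git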

Why I Still Host My Own Git Server

I could just move everything to GitHub or GitLab, right? And sometimes I wonder why I don’t. But there’s something satisfying about running my own Git server. No vendor lock-in, full control, and better privacy for internal projects. It’s a learning experience, too—though on days like this, I question that value.

Still, when it works, it works beautifully. SSH keys, custom hooks, CI triggers—it all runs fast and clean. Until, of course, the next update comes knocking.

Lessons Learned (Again)

Automate backups before every update. Yes, I skipped it this time; a rough pre-update routine is sketched at the end of this list.

Track config file changes with Git or tools like etckeeper.

Isolate services—containers or VMs help avoid system-wide side effects.

Always test updates in a staging environment. It sounds obvious, but when you’re busy, it’s easy to just say “yes” to that apt upgrade.
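To make the first two lessons stick, I'm leaning toward a small pre-update routine along these lines. It assumes etckeeper has already been initialized, and /backup is a placeholder for wherever your archives actually live.

#!/bin/sh
# pre-update.sh: snapshot config and repos before touching packages
set -e

# commit the current state of /etc (etckeeper keeps it in a git repo)
sudo etckeeper commit "pre-update snapshot $(date +%F)"

# archive the bare repositories somewhere off the root volume
sudo tar -czf /backup/git-repos-$(date +%F).tar.gz /var/git/repos

# only now run the upgrade
sudo apt update && sudo apt upgrade

It's not a staging environment, but it at least means the next "quiet" change to ownership or hooks is a diff away instead of a guessing game.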

So yes—another update, another morning lost to debugging. But at least now the server is humming again… until next time.