OpenBSD is one of my favourite operating systems. I’ve been using it on my workstations and laptops for almost 10 years now. I was even lucky enough to be able to use it on my work desktops and laptops. But this isn’t where OpenBSD really shines.
Two major problems with consumer-grade routers provided by internet service providers:
I discovered a whole suite of cool network software in the default OpenBSD installation. It’s what makes OpenBSD perfect for edge/gateway network devices. So we’ve got some nice software – what about some hardware to run it on?
Jeff Atwood of Coding Horror, Stack Overflow etc. fame posted the blog article The Scooter Computer, saying of standard residential network devices:
Let’s face it: this is just a little box that runs a chopped up version of Linux, with a bit of specialized wireless hardware and multiple antennas tacked on … that we’re not even using. So when it came time to upgrade, we wondered:
Why not just go with a small box that can run a real, full Linux distro? Wouldn’t that be simpler and easier to keep up to date?
So then I wondered: why not run real, full OpenBSD on these boxes? Hardware support for OpenBSD isn’t as complete as for Linux, though… Challenge accepted!
My most recent purchase is the XCY Firewall Appliance Mini PC for AUD140 (approx. $90 US, 85€). I affectionately call these devices chinaboxes. This chinabox came in some tidy cardboard, packed with foam. There’s also a VESA mounting plate thing and SATA cables:
It feels solid; it’s surprisingly heavy for its size.
It’s easy to get inside the box; just 4 Phillips head screws. Inside seems relatively neat and tidy:
Time to power it on! Of course it comes with some kind of maybe-kinda-probably-not licensed Windows (10?) with user “Admini”:
But we’re not interested in Windows right now - if ever ;)
So I rebooted and got into the BIOS:
Running OpenBSD on x86 PCs often involves turning off or tweaking a bunch of things in the BIOS. But I only ended up doing a couple of minor things. To get the device to behave more like other networking equipment, I set the device to always power back on after power loss:
Secure Boot is unsupported by OpenBSD so I disabled that:
Finally I found a setting mysteriously called “OS Selection”. I changed it from Windows to Linux. If anyone has more info on what this does, please let me know!
Booting the OpenBSD installer over the network via PXE, and from USB, started off fine:
For those unfamiliar, the OpenBSD install process is super straightforward, with basic plain-text prompts:
Success!
Where it will sit for a (long) while:
And as a bonus my old firewall still humming along:
See also the dmesg output at the bottom of this article.
There are a lot of different hardware configurations available from the manufacturer. It’s probably best to see the original item listing at XCY Firewall Appliance Mini PC. In particular it would be good to upgrade this 10+ year-old CPU to something like the Intel N100.
It’s highly recommended to read through the replies to Another successful OpenBSD setup. Some highlights follow.
cmnybo@discuss.tchncs.de asked:
Do any of those cheap Chinese computers ever get any firmware or bios updates?
None that I’m aware of, which sucks. However benja@ohnepunktundkomma.org let us know that CoreBoot may be available:
some of this boxes can run with #coreboot, so the #firmware is #opensource too. Protectli ported coreboot for their hardware, and with a little research you can find this hardware on aliexpress, of course under a different name.
Sorry for my ignorance I tried googling but what is this exactly? A server for files or? A media server?
Anything! It’s now a plain old server connected to the internet with a static IPv4 address and a /48 IPv6 subnet! relayd(8) is used as an HTTP reverse proxy and generic TCP proxy for internet services and custom software I write. For example:
dns.srcbeat.com:853
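A minimal relayd.conf sketch of such a generic TCP relay (the address, table name, and backend port here are placeholders, not the real configuration) might look like:

```
# /etc/relayd.conf - hypothetical sketch of a generic TCP relay
ext_addr="203.0.113.1"       # placeholder public address

table <dot> { 127.0.0.1 }    # backend service on loopback

relay "dns" {
	listen on $ext_addr port 853
	forward to <dot> port 853
}
```

See relayd.conf(5) for the full grammar; an HTTP reverse proxy additionally declares an `http protocol` block.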
that tp-link probably negates anything remotely resembling security on its own.
Yes, having a managed switch is surplus to requirements. I bought this one in a rush as it was cheap and had PoE. If anyone knows of an 8-port unmanaged PoE switch, please let me know! Or reply to Another successful OpenBSD setup via ActivityPub (Mastodon, Lemmy, KBin… you all know who you are ;) ).
TODO
Ironically, Stack Diary - the site which broke this story - serves Koko Analytics, which now ignores the Do Not Track (DNT) header (see commit 6890f3c).
Why? Koko Analytics devs cite Mozilla’s recommendation to ignore DNT. Safari no longer supports DNT. Chrome and Firefox do, but it’s off by default.
Global Privacy Control (GPC) is DNT’s spiritual successor, apparently. Mozilla funded the GPC implementation in Firefox back in December 2021. Koko Analytics does not support GPC. GoatCounter, a similar project, also ignores DNT and does not support GPC (see Why GoatCounter ignores Do Not Track).
Thankfully uBlock Origin blocks loading the scripts altogether.
They were still using Windows XP when they started to use Dropbox rather than USB drives. They travel to South Pacific islands for business.
There’s a new ecosystem growing in its own little universe, deliberately ignorant of the operating system on which it relies.
It reminds me of the Java space: using the software means wrangling gargantuan XML files to operate a system which poorly reimplements some existing feature of an OS. It’s the so-called “cloud-native” ecosystem.
Nobody really uses Kubernetes for day-to-day work, and it shows. Where UNIX concepts like files and pipes exist from OS internals up to interaction by actual people, cloud-native tooling feels like it’s meant for bureaucrats in well-paid jobs. There’s real culture shock coming from a - let’s say - “traditional UNIX background” where computing doesn’t require filling out YAML forms.
To monitor my home network and a couple of others I manage, I use VictoriaMetrics, a fork of Prometheus suitable for more resource-constrained environments. There’s a command called vmalert which manages sending alerts. You can’t run vmalert without flags:
% vmalert
2023-10-09T23:16:34.043Z fatal victoriaMetrics/app/vmalert/main.go:151 failed to init: failed to init datasource: datasource.url empty
So I need some flags. Which ones?
% vmalert -h | wc -w
3300
500 words into the 3,000+ word dump, I gave up. I guessed the smallest command:
% vmalert -datasource.url http://127.0.0.1:8428
Some questions going through my head:

- vmetrics is listening on the loopback address on the default port?
- Why not localhost instead of the IPv4 address 127.0.0.1?
- What is dnsresolver.address?
- There’s -datasource.disableKeepAlive. Why doesn’t other software let me specify this?

The answer is that the operating system upon which vmalert runs implements established conventions for all this - transparently.
But in the cloud-native world, like in the Java world, there’s a tendency towards that verbose, industrial, “sophisticated” way of running software. When you finally specify all those flags, neatly namespaced with . to make it feel all so very organised, you feel like you’ve achieved something. Sunk-cost fallacy kicks in: look at all those flags that I’ve tuned just so - it must be robust and performant!
“Cloud Engineers” get paid $150K+ to fiddle with these strings and make sure it’s all escaped/delimited correctly in YAML files. It’s a fucking mess. I’m ashamed enough that I can’t really apply to these jobs. Maybe writing and running software on servers in the commercial world is not a good fit for someone like me who despises corporate jargon.
Want an asynchronous, hierarchical, recursive, key-value database? With metadata like modified times and access control built-in? Sounds pretty fancy! Files and directories. And you’d think filesystems would be hard to use. But they’re not: you open(), read(), and close() without thinking about it. In the 90s my school taught us files and folders when we were 8 years old.
“Cloud-native” software co-exists with corporate jargon. Both obscure and complicate in the interest of perpetuating lucrative contracts over productive environments.
Using VictoriaMetrics like this feels like a bit of a strawman argument. But that’s just the way it came out today!
Back in 2019, a large car manufacturer in Germany wanted to send lots of data over MQTT to a so-called “data warehouse” to train future self-driving systems and for general business intelligence. We were the team responsible for the system receiving and normalising all the data coming from each car.
Every 2 weeks, around 100 people sat around a speaker barely able to hear each other over a poor phone line. It was getting into summer and buildings in the Netherlands were never designed for the summer heatwaves the country experiences nowadays.
Eventually it was time to hear updates from the so-called “admin” team. That week they had trouble updating version numbers of all the Scala libraries and microservices. One of the dev teams was not communicating updates to the admin team; version numbers in the HTML table in the wiki were stale. Some higher manager was wondering what was going on.
Fresh from a scrappy old-school IT services company of Linux enthusiasts I could not believe what I was hearing. I was so used to working overtime just to keep customers happy. In the chaos of it all, there was never any time to complain to anybody about not following procedures. Automation was a necessity to save me time and sanity.
Scripting the creation of an HTML table: what a juicy opportunity to take my attention away from the senseless bickering! The admin team had technical writers whom I was desperate to recruit anyway. The release procedure of the immense distributed monolith we were taking care of was totally undocumented; maybe they could help me write it down.
All the code for the system was written in Scala, spread out over about 100 repositories. The teams used an SBT plugin (of course they did) to manage the version number in each repo. SBT is a bit of an abomination: a huge, sprawling build and compilation tool. One handy feature I discovered: I could print the version of each SBT project by running something like
sbt release version
(Or something like that). To print all versions:
for d in work/*
do
(cd $d && echo $d && sbt release version)
done
But this was far from good enough to show to others. It was dog slow: this took several minutes for 100+ repositories. And that’s only if you had prepared the repository already. If not, you would get some obscure error and need to download the gigabytes of dependencies twice (I actually fixed this for the team! Another story for another time!). SBT and Scala were such a pain to install for something as trivial as printing a number. Lastly, sbt release version output was full of colour and inconsistent line breaks, making it hard to process from the shell.
Like most programming projects, it was well worth looking at the data rather than thinking about the code. The SBT plugin managed a file whose contents looked like:
// some comment about the version
version in ThisBuild := "1.0.0"
I knew how to extract that number without burning a hole through my over-specced MacBook Pro.
I wrote an awk(1) script, and called it sbtversion:
BEGIN {
FS = ":="
}
/^[ \t]*version/ {
i = index($2, "\"")
s = substr($2, i+1)
i = index(s, "\"")
version = substr(s, 1, i-1)
print version
}
All this does is extract the value between the double quotes. Now, to print the version of every repository:
for d in work/*
do
(echo $d && sbtversion $d/version.sbt)
done
This ran in a fraction of a second. A couple of lines of shell later, I had a HTML table:
<table>
<tr>
<td>name</td>
<td>version</td>
</tr>
...
</table>
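The “couple of lines of shell” themselves are lost to history, but a sketch of what they must have looked like (the gen_table function name is my invention; it assumes the repositories live under work/ and the sbtversion awk script above is saved as a file) is roughly:

```shell
# Hypothetical reconstruction: emit an HTML table with one row per
# repository, using the sbtversion awk script to extract each version.
gen_table() {
	printf '<table>\n<tr><td>name</td><td>version</td></tr>\n'
	for d in work/*
	do
		printf '<tr><td>%s</td><td>%s</td></tr>\n' \
			"$d" "$(awk -f sbtversion "$d/version.sbt")"
	done
	printf '</table>\n'
}
```

No JVM, no SBT, no network access: just a directory walk and a text scan.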
It was so fast and anyone could run it without installing anything! I was excited to help out the admin team. So I spoke to my manager and asked how I could send this file to them.
“They can’t run any software on their laptops” my manager replied. But maybe we could update that version page instead? “Our team doesn’t have permission”. Damn. “Who does?” I asked. “Let me check and get back to you”.
A few weeks later, this same manager refused to let other teams see our code repositories, so I quit.
Since then, much of the underlying software of that big wasteful project has seen major changes:
You know when you get into a new car and you try to use the entertainment system and things barely work? Bluetooth pairing with your phone doesn’t work, the interface lags so when you tap one button then another you don’t know which button it will register… It’s enough to make you wonder whether the people making these things know what they are even doing. Perhaps this little insight helps you to stop wondering.
Stake published an update following yesterday’s major system outage. The update addresses claims of intentional trade restriction. Stake claim the outage was purely technical in nature:
We have diagnosed the issue. We are hitting a new high in the amount of pings (requests) to our backend database from extremely heightened traffic, especially at market open. This is causing a bottleneck between customers using the platform and our database.
Stake’s platform is closed-source software. Their GitHub profile has no public repositories. A job post reveals that Stake is likely written in Java, backed by a Postgres database, and hosted on Amazon Web Services.
Later in their update, Stake directly address intentional trade restriction claims:
There have been suggestions that we are intentionally restricting customers from trading. Some will believe certain things regardless of the facts, but those that know us and have been customers of Stake know just how categorically untrue this is.
Successfully placed trades were executed. But customers saw some of their orders being cancelled. “I lose close to $800 because you cancelled my buy orders, TWICE” said user @Cry_Vengeance.
Unfortunately customers may not be easily compensated. Whilst not included in Stake’s own terms and conditions, their partners’ terms - which customers must also agree to - include clauses on technical outages. Clause 25 of partner DriveWealth’s terms, for example, states that orders may be cancelled at any time in the event of technical outages.
In a few hours the market reopens. Stake openly admit that customers should expect further disruption.
We plan to gradually redirect traffic from godoc.org to pkg.go.dev over the [next] few weeks. Pages will be redirected in this order:
Static pages (such as https://godoc.org/-/about, https://godoc.org/-/go, https://godoc.org/-/subrepo)
Homepage and search page
Package pages
Badge SVG (You can also generate a new badge if you would like to update your badge link ahead of time.)
Qiu announced the team’s plan to redirect traffic almost exactly one year ago.
But in a sense, the sunsetting of godoc.org may have been coming for much longer than this.
godoc.org was never really the Go team’s baby.
The team adopted godoc.org years after Gary Burd created it and subsequently stepped away from it back in 2014. As work on modules continued, godoc.org was not updated to keep up. godoc.org received changes but mainly for maintenance, like bumping the hardcoded package size limit.
After a hard look, it seemed worth starting anew, especially since the godoc.org server design, with its single-VM database, had been starting to show its age.
— Russ Cox in a post to the golang-nuts mailing list.
At the end of 2019, go.dev was announced featuring the project’s new branding. With it, came pkg.go.dev, a new site serving not only package documentation, but also module and licensing information. Unlike godoc.org, pkg.go.dev was closed source. But that wasn’t the original plan, according to Cox:
There’s no conspiracy here. The original plan was to open source it, but open sourcing puts pressure on the code base to be reusable in other contexts. Right now the code is only written for one context: the global pkg.go.dev site.
…
But I very much hear all of you who want to see the code that is running on the site and possibly contribute to it, whether it makes sense to run in other contexts or not. I am going to look into that.
About a week later, Cox announced the intention to open source pkg.go.dev. The team released pkgsite a few months later.
There was praise for the open-sourcing, but with it some code quality concerns.
But it wasn’t any of those that prompted one developer to fork the godoc.org repository, gddo, before the pkg.go.dev redirect.

go-source Controversy

A useful feature of gddo is how it renders direct source code links from documentation. If you are reading the documentation of a function and want to see how it is implemented, you click the name of the function and you are taken to the line containing the function signature in the source code repository.
The feature is implemented by parsing a go-source HTML tag in the repository’s website.
pkgsite has this feature, but it was not available for everyone at first. The feature regression prompted developer Drew DeVault to write an issue. pkgsite did not use go-source meta tags as they did not support module versions or directories, according to the main Google developer working on this part of the codebase, Jonathan Amsterdam. Instead, regular expressions match particular hostnames which in turn match software to which pkgsite knows how to link source code lines. Curiously, pkgsite contained unused code to parse go-source tags; this will become relevant later.
Take the package codeberg.org/jlelse/tinify as an example. Codeberg implements go-source tags, so source code links are rendered by gddo. pkgsite has no regular expression to match the Codeberg hostname, so there were no source code links. Even sites which use supported software, like Debian’s Salsa running GitLab, had no source code links, because its hostname is salsa.debian.org; it does not contain the string gitlab.
DeVault published a controversial opinion piece and forked gddo. To address this regression, Amsterdam proposed a new go-source-v2 tag.
The proposal was discussed but put on hold. Amsterdam noted:
For now, instead of defining a new tag that will require widespread adoption but still not be completely right, it seems best to get the most common sites working by making changes to pkg.go.dev directly, and then revisit the topic when we’ve had more time to think about the right path forward.
There have been changes since my initial investigation. DeVault has removed, in his words, “unnecessary salt” from the godocs.io announcement. Development continued. pkg.go.dev now renders source code links to sites serving go-source tags, just like gddo always has. The revision was committed just 2 days ago. And gddo no longer requires jQuery or Bootstrap.js, thanks to contributor Adnan Maolood.
With both source code and discussion being open, there can be extra uncertainty and a risk of controversy. “Will all this come together in the end?”
For this project at least, all’s well that ends well; the Go community now have two more useful, maintained interfaces to Go package documentation.
io/ioutil, like most things with util in the name, has turned out to be a poorly defined and hard to understand collection of things.
In a series of a few changes, the entire ioutil package is due to become deprecated starting from Go 1.16. Existing code using ioutil will continue to work; ioutil will consist of simple wrappers to new functions which reside in the io and os packages.
Initially, a proposal by Cox back in July was approved which saw the move of general I/O helpers, like ioutil.ReadAll, out of package ioutil and into io. Remaining code in ioutil consisted of OS file system helpers, like ReadFile. A few months later, a second proposal by Cox suggested moving those into package os. Acceptance of the proposal was the nail in ioutil’s coffin.
The deprecation of ioutil comes as part of what will be a significant Go release. Module-aware mode is enabled by default. The darwin/arm64 port will be released, which means Go will be natively supported on Apple’s new Macs using their M1 SoC. A new io/fs package, demoed last year, will make its debut.
Whilst new features tend to get more journalistic coverage, long time Go programmers may be encouraged by this recent deprecation. Relatively thankless work such as this suggests a dedication to keeping the core of the language clean and easy to understand; values that brought so many programmers to the language in the first place.
Migration of code using ioutil should be straightforward. Here is an example migration adapted from package wal in the popular Prometheus project:
package wal
import (
"fmt"
"io/ioutil"
"os"
...
)
func TestLastCheckpoint(t *testing.T) {
dir, err := ioutil.TempDir("", "test_checkpoint")
require.NoError(t, err)
defer func() {
require.NoError(t, os.RemoveAll(dir))
}()
...
We rename ioutil.TempDir to os.MkdirTemp. Now that ioutil is no longer needed, and os was already imported, we have one less dependency:
package wal
import (
"fmt"
"os"
...
)
func TestLastCheckpoint(t *testing.T) {
dir, err := os.MkdirTemp("", "test_checkpoint")
require.NoError(t, err)
defer func() {
require.NoError(t, os.RemoveAll(dir))
}()
...
Member of the Go team Bryan Mills has an open proposal for the go fix command to automatically migrate deprecated code. This means existing code using ioutil may not have to be changed by hand. Discussion of the proposal stalled over a year ago. Additional feedback from the proposal review committee may be requested later this year.
[Microsoft Windows (partial) source code and various Microsoft repositories]
price: 600,000 USD
data: msft.tgz.enc (2.6G)
We sent an email to the provided address, solarleaks@protonmail.com, asking for clarification and proof that the leaks were genuine. But the mail bounced with an “Address does not exist” message. We reached out to ProtonMail to ask whether the account was ever registered and will update this article with any reply.
The site’s contents are signed using PGP.
A Hacker News commenter advises the key used is E2C73BC53B9118A0. This is not available on the pgp.com keyserver.
The domain solarleaks.net was registered at Tucows just a couple of days ago on 11 January.
Hosting is provided by Swedish hosting company Njalla.
The encrypted files are also hosted on mega.nz, which may be an infringement on their terms of service.
Of course, 600,000 USD may be steep for Windows source code; Windows XP source code is available via BitTorrent already.
An OpenBSD developer (zhukov@) has added preliminary OpenBSD support to Open Broadcaster Software (OBS) Studio for release 26.1.0 and later.
The changes come as part of an ongoing collaboration between the upstream OBS project and OpenBSD developers.
Preliminary OpenBSD support was added in two commits. One introduced sndio support. This adds a sndio plugin which Zhukov advises will provide more reliable, lower-latency audio mixing than the ffmpeg plugin for OpenBSD users. The other provides basic support, such as evaluating OpenBSD-specific filesystem paths.
A link to the release was posted on Reddit, with a title claiming “full OpenBSD support”.
Bryan Steele (brynet@) was quick to provide helpful context in a comment:
Note that this is still a WIP and it hasn’t been submitted to the ports mailing list or committed to the ports tree, zhuk@ and others have been working with the upstream. As I understand there are issues that still remain, so “full OpenBSD support” is a bit premature.
We are still working on it… so please wait.
The OpenBSD project has a do-it-yourself habit: it writes its own versions of popular utilities. For years Apache, then nginx, was included in the base installation, until the project wrote its own HTTP server: httpd.
The same may happen with Game of Trees, an OpenBSD developer’s implementation of git.
The DIY motivation varies from program to program. got provides a different user interface for interacting with git repositories, with fewer, new subcommands. The same reimagining of the interface is not occurring with openrsync. For compatibility with non-OpenBSD rsync servers, openrsync supports GNU-style long command-line flags such as --archive instead of just -a.
So why does openrsync exist? openrsync is written under the OpenBSD project’s preferred ISC-style license. Its original purpose is for use by rpki-client. And it seems only ever intended for use as an rsync client: there is no rsync protocol daemon, which is often used by mirror sites such as rsync://ftp.nluug.nl/openbsd/.
What we seem to have is a rewrite of a subset of the rsync client program. For inclusion in the base installation, a small rewrite is easier to maintain than importing the entirety of Samba’s rsync. For a rough indication of this effort, sloccount shows that Samba’s rsync is almost 8 times larger than openrsync (43,000 versus 5,500 source lines of code).
Whilst openrsync is a work in progress, it is possible to use it today. The openrsync program has not been renamed to rsync, so connecting to a server requires the use of the --rsync-path flag. For example, the files making up this website are uploaded from any computer to an OpenBSD server using hugo and rsync as follows:
rsync -av --rsync-path /usr/bin/openrsync public/ ams.olowe.co:/var/www/htdocs/www.srcbeat.com/
As always, for more information see the openrsync(1) manual page.
If it was running macOS 11, I wouldn’t be too surprised. From j5create’s downloads page:
To avoid loss of functionality with the j5create USB™ Display and USB™ Ethernet adapters, we advise all Mac® users to delay updating to macOS® Big Sur 11.
It was running macOS 10.15. So what happened?
j5create’s (or really ASIX Electronics’) drivers come in the form of macOS kernel extensions using the deprecated IONetworkingFamily interface. Kernel extensions were deprecated in favour of system extensions, which run in user space instead of interacting directly with the kernel.
Why did Apple make this move? From a Hacker News commenter:
They’re just doing the typical Apple thing of enforcing a “one proprietary port, one licensed plug” policy.
This is too simple of an analysis. The whole story becomes clearer when looking at another strictly open source operating system.
Unusable USB ethernet dongles are par for the course for OpenBSD users. An OpenBSD user will not search the web for “j5create usb ethernet openbsd driver”; few bother writing drivers for relatively obscure operating systems. But most importantly, OpenBSD removed its loadable kernel module interface - lkm(4) - almost 6 years ago in release 5.7. In a sense, Apple may have been late to the party.
So why did both Apple and the OpenBSD developers remove this interface to the kernel? It is, at least in part, in the interest of system safety and stability. Removal of the interface prevents a whole class of errors like the one I encountered installing a USB dongle driver, which ended with me reinstalling the whole operating system.
The j5create dongle is in the bin. You may think that I would have gone to the Apple Store and bought Apple’s USB-C ethernet dongle. But you’d be wrong. I didn’t think twice before plugging in a $10 dongle I bought from eBay years ago. MacOS recognised it instantly without any driver installation required. How could I be so confident it was going to just work? Apple Magic (tm)? The dongle chipset is on the list of supported chipsets in the OpenBSD manual.
Game of Trees (Got) is a version control system which prioritizes ease of use and simplicity over flexibility.
got uses git repositories underneath, so you can use git and got on the same repository. Why are these developers spending their time developing this new tooling? The project’s first line about prioritisation is diplomatic, but software is often written to solve a problem. Git, like all software, has problems.
Steve Bennett wrote:
What a pity that it’s so hard to learn, has such an unpleasant command line interface, and treats its users with such utter contempt.
The command-line interface of Git is so confusing that you might think that these manual pages are real.
When you are onboarding yourself to a new codebase you’ll ask “how do I contribute?”. Does the project use git-flow? GitHub flow? GitLab flow? You arrive and sometimes there’ll be a mix of all those. And more. A branch with a fix from John who doesn’t even work here any more. A “dev” branch that hasn’t been updated in a few months because now we deploy direct from “main” (or “master”). I’m met with surprise when I tell people the Linux kernel and git itself are managed totally differently; plain-text patch files are sent via email (more info).
What should we be doing? From the official git project website:
Because of Git’s distributed nature and superb branching system, an almost endless number of workflows can be implemented with relative ease.
Since the tooling itself does not recommend a workflow, we end up relying on large, complicated server-side software to steer us towards one. GitLab requires at least 4GB of RAM on the server it is installed on. GitHub is a huge closed-source service that is a hard dependency of some developer tooling. There’s an opportunity here for software with some sane defaults to come along and demonstrate that you don’t need all this extra stuff for things to be easy to use.
That’s where got comes in. Is got going to change the entire distributed version control system landscape? I wish it would, but it’s not intended to; its target audience is OpenBSD developers. But there’s something happening. Got is being developed (see the mailing list) and sourcehut is profitable. It’s exciting to see where things are going.
This was posted on Reddit by somebody and has some interesting discussion.
srcbeat is the work of Oliver Lowe.
srcbeat is a portmanteau of source, from source code, and beat, from beat reporting.
srcbeat provides some services at no cost. It’s our little part of fighting for a free Internet; one free of mass surveillance and Big Tech cloud dominance. All servers are based in Sydney, Australia.

srcbeat provides an encrypted public DNS resolver via DNS over TLS (DoT).
dns.srcbeat.com
All addresses listen on the standard DoT port 853. Our nameservers do no logging and query the DNS root directly.
forward-zone:
name: "."
forward-tls-upstream: yes
forward-addr: 159.196.207.99#dns.srcbeat.com
forward-addr: 2403:5809:a040::1#dns.srcbeat.com
srcbeat hosts a network time server via NTP on both IPv4 and IPv6.

time.srcbeat.com