
Setting up the new server
Creating cloud instance
Creating HTML-files
Transferring files
From SVN to git
Running .Net apps
Running static sites
Email (very long)
Disk corruption issue
Goodbye to Amazon
What I did not do
Overall impressions
Behind the scenes

June 2021. By Bjørn Erling Fløtten, Trondheim, Norway. License: Creative Commons Attribution (CC BY).

Ditching Windows and AWS, diving into Linux and Hetzner

NOTE: There is a lot of text on this page. You can therefore toggle between 3 different presentations. Click on any heading or use link in upper left hand corner.

The story about how I finally migrated my personal server from Amazon AWS / Windows to Hetzner / Ubuntu.

As a result I now save a lot of money and also have a more pleasant server environment to work with. Performance is also dramatically improved.

Part 1: Background, old setup

TLDR; AWS. Windows. Visual SVN. hMailServer. PostgreSQL. Thunderbird. .Net. C#. Visual Studio.

My old setup dated from around 2009. It consisted of:

(Since I wanted management to be as hassle-free as possible I went only for tools with a GUI at that time.)

The server hosted about 7 domains with various personal projects, ranging from ASP.NET 4.5 / Visual Basic 7 to .NET Core 5.0 / C# 9.0.

The only technical problem with this setup was Windows eating up disk space over time; 40 GB is way too small for a Windows system partition. Most of my administrative tasks were therefore related to freeing up disk space (there are surprisingly many places where garbage accumulates).

Apart from the disk space issue I had no big technical reason to change this setup. I am a great adherent of "if it ain't broke, don't fix it", and also of KISS (keep it simple, stupid).

However, the cost was prohibitive, both the Windows tax and AWS's prices themselves.

Yes, you can prepay and get discounts, something which I actually did from time to time. I also upgraded and downgraded CPU / RAM a few times as new projects came and went. But why can't Amazon give me that discount automatically when they see a stable instance running for years on end? And why can't I get an automatic discount when my server runs on supposedly long-ago amortized hardware? In short, I dislike not being appreciated as a valuable customer. So, the time was ripe to ditch Amazon.

(I luckily never bought into any other services from AWS apart from EC2. I can imagine situations where you get so closely tied to all their special offerings which, although well implemented and quite useful, make it almost impossible to switch cloud provider later.)

I could have moved from Windows a long time ago except for the fact that I really like C# / .Net and Visual Studio.

I like C# because of its strong static typing, because it does not "get in the way" when I want to express something, and because of all the aspects borrowed from functional programming which make all kinds of transformations more succinct (I never got around to learning F#, but the need may never arise since C# gets more and more equivalent functionality).

And I like .Net and Visual Studio because of all the functionality offered. Whenever I evaluate some other programming environment I stumble upon all kinds of deal-breakers for me personally. A JetBrains editor with Kotlin as the programming language came very close, however.

When Mono first arrived I could of course have moved to Linux on the server, and now, with .Net Core officially supported on Linux, there was very little excuse left for me to stay on Windows.

(I do not create Windows Forms applications privately any longer, so Windows Forms missing on Linux was never a problem).

I expected the following advantages of moving my server setup:

I expected the following disadvantages:

My choice of cloud provider was Hetzner.

(Note: There is no affiliation here. I am also sure that there are lots of other quite satisfactory cloud providers that I could have used. )

I noticed Hetzner being mentioned in several Hacker News threads, like a discussion about Mighty and one titled 'Searching the web for under $1000/month'.

Hetzner's presentation is orderly and understandable, in the "Ordnung muss sein" style of the Germans. Their pricing is less opaque than AWS's, with monthly amounts clearly stated. The Windows tax is also clearly stated (Windows would be about 50% more expensive in my case).

(Hetzner actually discourages use of Windows, which I interpret as them offering what is best for the customer.)

Part 2: Setting up the new server

Note: Almost every separate step has been marked with approximately how long I spent on it, but I did not do the whole process in one go; it was done over the course of several days between work and family obligations. I also suppose that people younger than me could do this faster with the same level of starting expertise, as my brain works a little slower now in middle age. Time spent on polishing the log of my progress into this presentation is not included.

Warning: This text is not a technical guide, it is a description of my conversion experience from Windows to Linux. You might get some technical answers here, but the presentation is chronological, not pedagogically conducive to teaching something or solving a problem. My system administration skills are also not as developed as my developer skills (pun not intended), so there is some trying and failing in here. Everything is included in the text, warts and all. Some readers will find some points banally simple, and other readers will not understand some points because I might suddenly take for granted that some technical term is understood by the reader.

I have strived to always end up with the minimum functionality necessary (packages, shell commands, contents of configuration files and so on), so the installations themselves should be the minimum needed for the desired functionality and security.

Step 0: Creating a cloud server instance.

Total time: Less than one hour (excluding preliminary research)

TLDR; Nuremberg, Ubuntu, AMD Milan Epyc 7003, 16 GB, 160 GB, 20 TB.

(Disclaimer: Note that I am comparing setting up a Hetzner instance in 2021 with my experience setting up an Amazon instance many years ago. Taking this into consideration, the comparison may be somewhat unfair to Amazon.)

Note: I needed a dedicated instance, for reasons of performance, but Hetzner does have even cheaper solutions you can choose from.

Hetzner first requires you to choose

1) Physical location, 2) Linux distribution, 3) Hardware type.

For Hardware type the choice is between Intel Xeon Gold and AMD Milan Epyc 7003 processors. I chose their CCX22 "dedicated" instance at EUR 34.90 per month with AMD processor (always supporting the underdog). "Dedicated" means that each vCPU instance has its own dedicated CPU resources. I went for CCX22 with 16 GB RAM, 160 GB Disk space, 20 TB Traffic. This is a bit of overkill but I do really dislike encountering constraints like disk space or RAM when experimenting with new projects. I could have chosen a "Standard" instance for under half the price, but I like having my web pages loading quickly.

I chose to pay with SEPA (Single Euro Payments Area) because my bank (Danske Bank) offers such payments free of charge. At least that is what their price list says. It should also be less of a hassle than paying with my private debit card and then filling out an expenses form to my company in order to get reimbursed, like I had to do with AWS.

When adding a new server Hetzner immediately offers to create a firewall which was a straightforward process. It defaults to TCP on port 22 (for SSH) and ICMP. I also added port 80 for HTTP and 443 for HTTPS for my web server, and port 25 for my email server.

Note: Later on I discovered that I never applied that firewall to my server instance, which was somewhat strange because port 80 worked quite OK after I had set up Apache as web server. TODO: Link to Apache.

I chose Nuremberg as location (it sounded vaguely familiar), but I suppose any of their EU locations would be OK for me as I am based in Norway and none of them is particularly close. They offered Nuremberg, Falkenstein and Helsinki.

Then I chose the Linux distribution. The default choice is Ubuntu, and I suppose that was the safe one. I know that systemd is very controversial, but at my level of expertise I just do not care. Apart from Ubuntu, they also offer Fedora, Debian and CentOS.

You can also set up extra volumes and networks, something which I skipped. I suppose an extra volume will be easy to create and mount later.

Adding an SSH key was trivial. I also set the server name to something easy, in my case "". With AWS you always get some default gibberish that you never remember and which pollutes later reports and queries. The IP address was given automatically; no need to ask for it specifically, as when I last set up an AWS instance.

All in all the process was extremely simple. No credit card needed, only a verification per email. The Cloud Console of Hetzner is very easy to understand, with simple options, like for instance rescaling (CPU, RAM, disk).

Then came the issue of logging in to my newly created instance...

Adding an SSH key was stated to mean that they would not have to send me the root password in clear text. Which is all well and good, but how to log in then? There is a CLI console offered through Hetzner's Cloud Console, but no hint about which credentials to use.

I decided to try PuTTY and log in via SSH. Same problem, of course. However, in the Cloud Console, under Rescue, there is a "Reset root password" option, which I used.

In the end I have to admit to not knowing the purpose of adding an SSH key originally.

Step 1: Installing a web-server.

Total time: A few minutes.

TLDR; Apache.

I supposed the choice was between Nginx and Apache. A cursory search on the internet found no deciding factors relevant for me (a small personal web server) when choosing between those two. This also includes considerations when running .Net Core, something which I intended to do.

So, I just did:

apt update
apt install apache2
ufw app list
ufw allow "Apache Full"

Note: For this and all the following examples of shell commands you will see that I am not using sudo. This is because I log into the server as a user with root access anyway when doing administrative tasks, so sudo is unnecessary for me (I do not use the server on a day-to-day basis with an "ordinary" user login).

This just took a minute or two, and I had the "Apache2 Ubuntu Default Page" available from my web browser on the IP address.

Note how I naturally accept that administration on Linux is best done through the command line interface (CLI).

Although I had mainly used the GUI (graphical user interface) on Windows, I clearly see that the CLI is best on GNU / Linux: examples on the Internet are given as CLI commands (shell commands), and it is inherently faster and more efficient, both to describe and to execute. I had also dabbled with some GNU / Linux earlier, so this was not totally unfamiliar terrain.

Step 2: Creating some HTML-files, choosing an editor (and key bindings).

Total time: Around 4 hours.

TLDR; Using ne, or else just editing in Windows!

Creating some HTML files naturally leads to the question: Which text-editor to use.

I had earlier tried vi / vim and also Emacs, but never understood the need for such cryptic interfaces (ducking, having now offended BOTH sides of that debate).

I like to be able to start writing right away, and am accustomed to navigating with the cursor keys, not some CTRL-letter combination. My favourite editor for text and HTML on Windows is actually Notepad (yes, the built-in one!). For C# I use Visual Studio.

I use the keyboard exclusively for editing anyway, so editing in a terminal window should not be a problem. In other words, editors like ne, micro, nano, pico are much more natural for me to use.

The issue was finding an editor which actually supported basic Windows functionality like CTRL-left, CTRL-right for moving one word, SHIFT and cursor keys for marking, and clipboard with preferably both CTRL-x,c,v and SHIFT-Delete, CTRL-Insert, SHIFT-Insert.

There are some posts on the internet related to this issue with various suggestions, but I found nothing working out of the box.

Struggling with basic functionality like using SHIFT and cursor keys for marking, I also found gems like this: "ne doesn't highlight text when you select it, so you'll have to use your imagination here." (thanks for this tip).

The funny thing is that PuTTY happily marks text with the mouse anyway, except that the selection is not recognized by ne.

I have to admit that some of the pages I found are totally off-putting. It is almost like a parody of the problems that GNU / Linux struggles with in order to get people like me to migrate from Windows.

Another page was maybe more helpful, but still did not give any solution.

In short, I got suggestions ranging from changing some PuTTY settings, switching to a different editor, switching to a different terminal client, and making some cryptic Bash adjustments. None of these actually worked.

In the end I just gave up. I will just use the ne editor, with CTRL-b for marking and ALT-f,b for moving a word. Together with HOME and END this becomes acceptable.

Note: Later on you will see that by mapping the Ubuntu disk structure to Windows drive Z:, I could actually get away without any editing on Ubuntu whatsoever, leaving the whole issue moot.

Note: Later on I also discovered that since PuTTY by default puts marked (by mouse) text in the Windows clipboard, and by default offers SHIFT-INSERT for paste from the Windows clipboard, I could at least copy text to and from my shell login with the help of only PuTTY. This greatly helped with running shell commands and copying back responses for documentation.

Step 3: Transferring files, creating static web site.

Total time: Around 1 hour.

TLDR; Using WinFsp and SSHFS-Win, map Ubuntu storage to drive Z: in Windows

With a Windows server I connect with "mstsc" (Microsoft Terminal Services Client) which uses the RDP (Remote Desktop Protocol). This has the option of mounting my local disk drive to the server, meaning that files on my client PC are immediately accessible from the server. Terminal and file transfer in the same client, easy and simple!

Any hope for this in the GNU / Linux world?

Actually I do not care which way I mount a drive or folder, server on client or client on server, as long as I can easily transfer files to and from.

The former, mounting a server disk on the client, is probably better from a security viewpoint, since a compromised server can then not influence the client (while a compromised client can always compromise any server it logs into).

I ended up with SSHFS-Win, which mounts my GNU / Linux folder on my Windows computer. The instructions explain how to install two programs, WinFsp and SSHFS-Win.

This went without a hitch. I mapped the root folder to the Z: drive and voila!, I now have the server files available from my client. The mapping even stays permanent when I reboot my PC, which is a plus.

Note that mounting this way (server on client) also makes the editor-issues magically go away, since I can now just use my familiar Windows environment for editing files on my Ubuntu server.

I could now populate Apache's html folder by just copying the relevant files to Z:\var\www\html, and maybe also use chmod on the files in order to give Apache the necessary read access.
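For the chmod part, here is a minimal sketch of the usual pattern (world-readable files, traversable directories); a temp directory stands in for /var/www/html, since I did not note the exact commands I used:

```shell
# Sketch only: a temp directory stands in for /var/www/html.
DOCROOT=$(mktemp -d)
echo '<html><body>Hello</body></html>' > "$DOCROOT/index.html"
chmod 755 "$DOCROOT"                  # directories need the execute bit so Apache can traverse them
chmod 644 "$DOCROOT"/*.html           # files only need to be world-readable
stat -c '%a' "$DOCROOT/index.html"    # prints 644
```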

Step 4: Moving from SVN to git for private repositories.

Total time: Around 2 hours.

TLDR; Storing in /home/{userName}, accessing via SSH, using TortoiseGit.

I was still using Subversion (SVN) for some private repositories on the old server (with TortoiseSVN as client). I was of course aware that git would be much better on the new server, and I was also familiar with git (but have to admit to liking TortoiseGit as git GUI).

apt install git

on the server was an easy first step.

Then I had to understand how git communicates; this is explained quite OK in the guide I found.

In my case I needed a git server only for some private repositories that I am the only user of. (For things that I share with other people I just use a service like Bitbucket.)

(My private repositories are a hodgepodge of all kind of stuff from over 30 years back in time. They are quite big, several GB, and therefore less suited for services like Bitbucket which have size limitations. )

This all meant that I could use SSH access without any special configuration (the server was of course already set up for SSH).

Three easy steps and I was done:

  1. Create a directory for the repository in my home directory on the server.
  2. Use "git init --bare" in that directory.
  3. Clone it from my client PC with "git clone ssh://user@..."
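The three steps can be sketched like this; temp directories stand in for the real server and client here, and the repository name "myrepo" is made up:

```shell
SERVER_HOME=$(mktemp -d)                   # stands in for /home/{userName} on the server
git init --bare "$SERVER_HOME/myrepo.git"  # steps 1 and 2: create the directory, init a bare repo
CLIENT=$(mktemp -d)                        # stands in for the client PC
git clone "$SERVER_HOME/myrepo.git" "$CLIENT/myrepo" 2>/dev/null
# Over SSH the clone would instead look like:
#   git clone ssh://user@server/home/user/myrepo.git
```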

After that it is just copying the files from the old SVN repository.

I used the opportunity to reorganize everything and archive old documents into ZIP files. This drastically reduced the number of files from many thousands to some hundreds.

I do not care about commit history. I know I could have used "git svn clone" for instance. Instead I will just keep a static backup of the old SVN repository for "just in case".

I got about 30 - 50 mb/s upload rate to Hetzner. The limitation was probably due to my own WiFi connection, and not some limit on their side.

Note: Actually I did the main upload to git some days later, and then I experienced a disk corruption issue which was NOT very pleasant.

Step 5: Running .Net Core applications on the server.

Total time: 6 hours.

TLDR; Eminently possible, no roadblocks encountered.

Proxy server
Adding VirtualHost
Monitoring the app (service)
Publishing the app
Running from web browser
Automating deployment
Next site (problems)
Third time lucky?
Fourth app, feeling like a pro

I decided to start with my ARNorthwind application.
This is a pretty simple application demonstrating the query language of ARCore (AgoRapide 2020).

I started with Microsoft's documentation on hosting ASP.NET Core on Linux with Apache.

It looks intimidating at first glance but is actually quite natural to follow.

NOTE: In order to understand the text below it helps to read the content from Microsoft's documentation linked to above.

Step 5A: Prerequisites.

Total time: 20 minutes.

I started with installing .Net itself from these instructions:
(I use version 20.04 of Ubuntu).

I chose to also install the SDK in case I later would want to also develop on the server (you can also choose to only install the runtime).

wget -O packages-microsoft-prod.deb
dpkg -i packages-microsoft-prod.deb
apt-get update
apt-get install -y apt-transport-https
apt-get update
apt-get install -y dotnet-sdk-5.0

Step 5B: Configuring use of a proxy server.

Total time: 10 minutes.

TLDR; Not necessary!

I added this C# code in Startup.cs in my application:

Note: I am not a great fan of the "using" directive when it only relates to one line of code, as you can see below (there is an inordinate amount of example C# code on the Internet which misses the necessary "using" statements, because people forget to include them when they copy-paste the code).

app.UseForwardedHeaders(new ForwardedHeadersOptions
{
    ForwardedHeaders = Microsoft.AspNetCore.HttpOverrides.ForwardedHeaders.XForwardedFor |
                       Microsoft.AspNetCore.HttpOverrides.ForwardedHeaders.XForwardedProto
});

NOTE: This later turned out to be unnecessary, so two days later I removed it.

Step 5C: Configuring Apache, adding VirtualHost

Total time: 1 hour

TLDR; Configure VirtualHost in "etc/apache2/sites-enabled". Adding headers_module and proxy_http_module to the Apache installation.

The instructions from Microsoft apply to CentOS and say to configure in /etc/httpd/conf.d/, which I did not find.

But the "Apache2 Ubuntu Default Page" (/var/www/html/index.html) helpfully says to use /etc/apache2. This folder again has sub-folders, but apache2.conf helpfully points to /etc/apache2/sites-enabled for virtual host configurations.

This I did from my Windows client, since everything on the server is now mapped to Z:

So I went to the folder Z:\etc\apache2\sites-enabled and added ARNorthwind.conf with the following content:

<VirtualHost *:*>
    RequestHeader set "X-Forwarded-Proto" expr=%{REQUEST_SCHEME}
</VirtualHost>

<VirtualHost *:80>
    ProxyPreserveHost On
    ProxyPass /
    ProxyPassReverse /
    ServerName
    ErrorLog ${APACHE_LOG_DIR}/ARNorthwind-error.log
    CustomLog ${APACHE_LOG_DIR}/ARNorthwind-access.log common
</VirtualHost>

(Port no 5000 corresponded to Properties\launchSettings.json in my Visual Studio project)

Note: The eventual final version of this file can be found here (BitBucket)

Then I tried to execute, according to instructions:

service httpd configtest

which unfortunately responded not very helpfully with

httpd: unrecognized service

However, since I am on Ubuntu and not CentOS, it was easy to guess "apache2" as correct service name, that is, I tried:

service apache2 configtest

But "configtest" was not accepted anyway.

More googling led to

apache2ctl configtest

and this error message:

AH00526: Syntax error on line 2 of /etc/apache2/sites-enabled/ARNorthwind.conf:
Invalid command 'RequestHeader', perhaps misspelled or defined by a module not included in the server configuration
Action 'configtest' failed. The Apache error log may have more information.

Still more googling revealed that "headers_module" was not enabled. I did

a2enmod headers

which instructs to follow up with

systemctl restart apache2

Now Z:\etc\apache2\mods-enabled listed a file headers.load.
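As far as I understand, a2enmod does something quite simple behind the scenes: it symlinks the module's .load file from mods-available into mods-enabled, which is exactly the headers.load file that appeared. A sketch with temp directories standing in for /etc/apache2:

```shell
APACHE=$(mktemp -d)                        # stands in for /etc/apache2
mkdir "$APACHE/mods-available" "$APACHE/mods-enabled"
echo 'LoadModule headers_module /usr/lib/apache2/modules/mod_headers.so' \
    > "$APACHE/mods-available/headers.load"
# "a2enmod headers" essentially does this:
ln -s "$APACHE/mods-available/headers.load" "$APACHE/mods-enabled/headers.load"
ls "$APACHE/mods-enabled"                  # prints: headers.load
```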


apache2ctl configtest

now tells me:

AH00526: Syntax error on line 6 of /etc/apache2/sites-enabled/ARNorthwind.conf: Invalid command 'ProxyPreserveHost', perhaps misspelled or defined by a module not included in the server configuration

which Google tells me is caused by "proxy_http_module" not being enabled so I do:

a2enmod proxy_http
systemctl restart apache2
apache2ctl configtest

and voila:

Syntax OK

as desired.

Microsoft's documentation tells me to restart Apache with

systemctl restart httpd
systemctl enable httpd

(which anyway would have been apache2 and not httpd in my case)

but I decided to wait until completing some further steps.

Step 5D: Monitoring the app through a service

Total time: 30 minutes.

From my client, I added to Z:\etc\systemd\system a file ARNorthwind.service with the following content:

[Unit]
Description=ARNorthwind is a demonstration of AgoRapide (ARCore) with data from Microsoft's Northwind example dataset.

[Service]
WorkingDirectory=/var/www/app/ARNorthwind
ExecStart=/usr/local/bin/dotnet /var/www/app/ARNorthwind/ARNorthwind.dll
Restart=always
# Restart service after 10 seconds if the dotnet service crashes:
RestartSec=10
KillSignal=SIGINT
SyslogIdentifier=ARNorthwind
User=apache
Environment=ASPNETCORE_ENVIRONMENT=Production

[Install]

Note: The content above is not the final version. The correct one would eventually become

[Unit]
Description=ARNorthwind is a demonstration of AgoRapide (ARCore) with data from Microsoft's Northwind example dataset.

[Service]
WorkingDirectory=/var/www/app/ARNorthwind
# Note: Port number must correspond to the port number in the corresponding ARNorthwind.conf file.
ExecStart=/usr/bin/dotnet /var/www/app/ARNorthwind/ARNorthwind.dll --urls "http://localhost:5002"
Restart=always
# Restart service after 10 seconds if the dotnet service crashes:
RestartSec=10
KillSignal=SIGINT
SyslogIdentifier=ARNorthwind
User=www-data
Environment=ASPNETCORE_ENVIRONMENT=Production

[Install]

but we have not come to that yet here (Remember that I explain my process chronologically). You can also find this file here (BitBucket)
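Since the port in ExecStart must match the ProxyPass target in the .conf file, a small consistency check can catch mismatches early. This is a sketch with one-line stand-in files; the ProxyPass target shown is illustrative, not copied from my real configuration:

```shell
SERVICE=$(mktemp)
CONF=$(mktemp)
echo 'ExecStart=/usr/bin/dotnet ARNorthwind.dll --urls "http://localhost:5002"' > "$SERVICE"
echo 'ProxyPass / http://localhost:5002/' > "$CONF"
svc_port=$(grep -o 'localhost:[0-9]*' "$SERVICE" | cut -d: -f2)
conf_port=$(grep -o 'localhost:[0-9]*' "$CONF" | cut -d: -f2)
[ "$svc_port" = "$conf_port" ] && echo "ports match ($svc_port)"   # prints: ports match (5002)
```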

I then enabled the service with

systemctl enable ARNorthwind.service
systemctl start ARNorthwind.service
systemctl status ARNorthwind.service

The last one showed an error, which I thought was because I had not yet published the app (I had not yet put the actual DLLs and other files constituting the app into the folder /var/www/app/ARNorthwind).

Step 5E: Publishing the application.

Total time: 10 minutes.

This was done in exactly the same manner as before when publishing to IIS on my AWS instance.

Using "Build | Publish ARNorthwind" from Visual Studio and then copying the files to Z:\var\www\app\ARNorthwind

Step 5F: Testing the application, fixing issues with ARNorthwind.service and publishing.

Total time: 4 hours.

TLDR; Correct User name in .service-file is "www-data", not "apache". .Net is located in "/usr/bin/dotnet", not "/usr/local/bin/dotnet". Be careful with "chmod".

Hoping that everything was in order, I executed

systemctl start ARNorthwind.service

which said to run

systemctl daemon-reload

first. After once more executing

systemctl start ARNorthwind.service

I still got the same error condition from

systemctl status ARNorthwind.service

(mentioned in the last step but not shown). It was marked in red:

(code=exited, status=217/USER)

According to Google (once again) code 217 means that the user (in this case "apache") does not exist.

More googling and I tried with "www-data" as user-name in ARNorthwind.service instead of "apache".
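The diagnosis is easy to confirm: status 217/USER just means systemd could not look up the account named in User=. A quick check (the helper function is my own):

```shell
# Succeeds if the account exists in the user database.
check_user() { id -u "$1" >/dev/null 2>&1; }

check_user root   && echo "root exists"      # always present
check_user apache || echo "apache missing"   # the CentOS name; absent on a stock Ubuntu
# On Ubuntu the Apache user is www-data instead.
```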


systemctl daemon-reload
systemctl start ARNorthwind.service
systemctl status ARNorthwind.service

resulted in

(code=exited, status=200/CHDIR)

which means that the directory does not exist or is not available.

I had not done anything about access rights yet, so I used chmod to set the correct access rights for /var/www/app/ARNorthwind, but to no avail.

A check for existence of /usr/local/bin/dotnet however turned up empty.

whereis dotnet
dotnet: /usr/bin/dotnet /usr/share/dotnet /usr/share/man/man1/dotnet.1.gz

pointed to /usr/bin/dotnet, so I changed ARNorthwind.service accordingly (as already mentioned, the instructions from Microsoft apply to CentOS, not Ubuntu, so I suppose this is to be expected).

But, I still got the same result

systemctl status ARNorthwind.service
(code=exited, status=200/CHDIR)

I tried executing directly

/usr/bin/dotnet /var/www/app/ARNorthwind/ARNorthwind.dll

and then got an exception from the code in my own application stating that I had missed one file in the publish procedure (instnwnd.sql in this case; I had forgotten that I copied this file manually the first time I set up the application for AWS / Windows / IIS).

I copied the file, set access rights with chmod and tried again.

Same result.

I tried executing again directly

/usr/bin/dotnet /var/www/app/ARNorthwind/ARNorthwind.dll

which said (among other things)

Now listening on: http://localhost:5000

which looked very promising (it means that my .Net application was probably working on the server).

However, ARNorthwind.service was still a problem. It still said

systemctl status ARNorthwind.service
(code=exited, status=200/CHDIR)

In desperation I did "chmod -R 777" on the ARNorthwind folder, but to no avail.

Others had similar problems, like one question which unfortunately had no response at the time of writing (but then it is badly formulated of course).

Another question is formulated exactly like my problem, but also without any answer (at the time of writing).

I did a long search all over the Internet for a solution. It took about 2 hours and I am not showing the details here, because it was all meaningless.

It finally dawned on me that I had not chmod'ed the app directory, only its child directory ARNorthwind. (I had created app parallel to html in order to distinguish between static web pages served as the default web site and .Net Core (Kestrel) applications, which would typically be served under separate domains.)

And when I ran the application directly, that was as the "root" user, which of course had the necessary access rights since it was the user who created the directory.

So chmod 755 on /var/www/app fixed the issue.
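The underlying rule is that reaching a file requires the execute (search) bit on every directory along the path, not just the last one. A sketch with a temp directory standing in for /var/www/app:

```shell
PARENT=$(mktemp -d)                  # stands in for /var/www/app
mkdir "$PARENT/ARNorthwind"
chmod 755 "$PARENT/ARNorthwind"      # the child looks fine on its own...
chmod 700 "$PARENT"                  # ...but a parent without o+x blocks everyone else,
                                     # which is what produced status=200/CHDIR
chmod 755 "$PARENT"                  # the actual fix
stat -c '%a' "$PARENT"               # prints 755
```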

The moral of this story is that the Internet and Google are not always a positive thing. Thinking about all my steps and looking inwards from the start would have been better.

After a little application specific quirk (that is, a quirk in ARNorthwind) regarding access rights for writing data to disk, everything seemed to run smoothly.

That little quirk however left me with 474,900 messages in systemd-journald (journalctl). I considered deleting them. I felt exactly the same as the originator of one such question, but the answers clearly explain that it is not worth the bother.

Step 5G: Running the application from the web browser.

Total time: 10 minutes.

I was now ready to restart Apache as per instructions with

systemctl restart httpd

Which of course did not work (the instructions from Microsoft are for CentOS, as already mentioned); the correct one on Ubuntu is

systemctl restart apache2

The part in Microsoft's instructions about "systemctl enable ..." I skipped. It did not seem necessary.

I could now try the link in the browser, and voila! Everything runs smoothly.

However, repeated requests from the web-browser showed an increase in memory usage the next time I did

systemctl status ARNorthwind.service

so I noted to myself to look for a potential memory leak in the future, but without much trepidation (memory usage is complicated, better not to worry too much).

Note: This later turned out not to be an issue after all.

Step 5H: Automating deployment

Total time: 30 minutes

TLDR; Using .bat-file from Windows

To wrap it all up, within my Visual Studio folder structure for ARNorthwind I put the ARNorthwind.conf and ARNorthwind.service files inside a new Properties/UbuntuApache-folder.

You can check for yourself in the git-repository for ARNorthwind at

And I also created a (Windows) deployment script DeployFromWindows.bat inside the folder with the following code:

REM Ensure that Build | Publish ARNorthwind was done in Visual Studio.
REM
REM Ensure that drive Z: is mapped to the root of the web server.
REM
REM In other words, ensure that the following folders are available:
REM   Z:\etc\apache2\sites-enabled (for copying of file ARNorthwind.conf)
REM   Z:\etc\systemd\system (for copying of file ARNorthwind.service)
REM   Z:\var\www\app\ARNorthwind (for copying of ARNorthwind\bin\Release\net5.0\publish)
PAUSE
COPY ARNorthwind.conf Z:\etc\apache2\sites-enabled
COPY ARNorthwind.service Z:\etc\systemd\system
COPY ..\..\bin\Release\net5.0\publish\*.* Z:\var\www\app\ARNorthwind
REM Most relevant commands to execute on the server now are:
REM   systemctl restart ARNorthwind.service
REM If ARNorthwind.conf was changed:
REM   apache2ctl configtest
REM   systemctl restart apache2
REM If ARNorthwind.service was changed:
REM   systemctl daemon-reload
REM   systemctl restart ARNorthwind.service
REM If this was the initial publish, additional steps must also be taken.
REM See ARNorthwind.conf and ARNorthwind.service for details.
REM Also, ensure that folder \var\www\app\ARNorthwind\Data AND the files within have the necessary write rights.
PAUSE

This is not 100% automated deployment of course. I could have followed up with a similar script to run on the Linux side, but since it is sufficient in 99% of cases for me to only do

systemctl restart ARNorthwind.service

I did not bother automating deployment any further.
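For completeness, such a Linux-side follow-up could be sketched as below, mirroring the commands from the REM comments. It is wrapped in a function so nothing runs until invoked on the real server:

```shell
# Sketch of a Linux-side deployment follow-up; only meaningful on the actual server.
deploy_finish() {
    apache2ctl configtest || return 1      # abort if the Apache config is broken
    systemctl daemon-reload                # pick up changes to the .service file
    systemctl restart ARNorthwind.service
    systemctl restart apache2
}
# On the server one would simply run: deploy_finish
```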

Step 5I: Next site (which did not go smoothly).

Total time: 3 hours.

TLDR; Problems when using 'UTF-8 with BOM' encoding for configuration files, messing up chmod, and a misunderstanding about Kestrel port numbers with subsequent port number conflicts.

The next day I deployed the next website. I expected it to go like a breeze (always the optimist), but of course something broke again.

After doing everything like I did for ARNorthwind, spending just a few minutes and deploying with DeployFromWindows.bat as in the last step,

apache2ctl configtest

returns with

AH00526: Syntax error on line 1 of /etc/apache2/sites-enabled/ARAdventureWorksOLAP.conf: Invalid command '\xef\xbb\xbf#', perhaps misspelled or defined by a module not included in the server configuration

This looks like some encoding error in the file.

Googling tells me sure enough:
"\xef\xbb\xbf are three invisible junk characters (at least from Apache's perspective) called the Unicode BOM, or byte order mark. In your editor, especially if your editor is Notepad, make sure you're saving your file without a BOM." (thanks to nitro2k01 on StackOverflow).

But the file started out as a copy of ARNorthwind.conf, so this is very unexpected.

cat ARAdventureWorksOLAP.conf

returns just fine.

Same issue in both files:

hexdump ARAdventureWorksOLAP.conf | head -1
0000000 bbef 23bf 5220 6c65 7665 6e61 2074 6877
hexdump ARNorthwind.conf | head -1
0000000 bbef 23bf 5220 6c65 7665 6e61 2074 6877

So deleting ARAdventureWorksOLAP.conf (leaving only ARNorthwind.conf) sure enough led to

apache2ctl configtest

reporting the same problem in ARNorthwind.conf as in ARAdventureWorksOLAP.conf:

AH00526: Syntax error on line 1 of /etc/apache2/sites-enabled/ARNorthwind.conf: Invalid command '\xef\xbb\xbf#', perhaps misspelled or defined by a module not included in the server configuration

Somehow I must have managed to change the encoding when saving with Notepad (I use Notepad on Windows as my primary text editor). ARNorthwind.conf was then copied over when I tested automated deployment, and the problem lay dormant because Apache had not been restarted.

I looked at some of my Notepad documents to see which encodings I have been using so far (what Save As would show) and discovered everything from 'ANSI' to 'UTF-8 with BOM' to 'UTF-16 LE'.

I just changed back to 'ANSI' and the problem went away.
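For future reference, the BOM can also be both detected and stripped directly on the server; a small sketch (the file name is just an example):

```shell
# Simulate a file saved by Notepad as 'UTF-8 with BOM' (\357\273\277 = EF BB BF)
printf '\357\273\277# ServerName example\n' > /tmp/bomtest.conf

# Detect: the first three bytes print as ef bb bf
head -c 3 /tmp/bomtest.conf | od -An -tx1

# Strip the BOM in place (GNU sed understands \xHH escapes)
sed -i '1s/^\xef\xbb\xbf//' /tmp/bomtest.conf
head -c 1 /tmp/bomtest.conf    # now the first byte is '#'
```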

However, after

systemctl start ARAdventureWorksOLAP.service
systemctl status ARAdventureWorksOLAP.service

I got

ARAdventureWorksOLAP.service - ARAdventureWorksOLAP is a demonstration of AgoRapide 2020 (ARCore) with data from Microsoft's Adven>
   Loaded: loaded (/etc/systemd/system/ARAdventureWorksOLAP.service; enabled; vendor preset: enabled)
   Active: activating (auto-restart) (Result: signal) since Tue 2021-05-11 11:46:38 CEST; 4s ago
  Process: 45059 ExecStart=/usr/bin/dotnet /var/www/app/ARAdventureWorksOLAP/ARAdventureWorksOLAP.dll (code=killed, signal=ABRT)
 Main PID: 45059 (code=killed, signal=ABRT)

May 11 11:46:43 bef systemd[1]: /etc/systemd/system/ARAdventureWorksOLAP.service:1: Assignment outside of section. Ignoring.

which looked like the problem also existed in the .service file.

The interesting thing about the "systemctl status ARAdventureWorksOLAP.service" command is that if I had executed it too late, I would have missed the actual problem; executing it after the 10-second restart cycle I only got:

systemctl status ARAdventureWorksOLAP.service
ARAdventureWorksOLAP.service - ARAdventureWorksOLAP is a demonstration of AgoRapide 2020 (ARCore) with data from Microsoft's Adven>
   Loaded: loaded (/etc/systemd/system/ARAdventureWorksOLAP.service; enabled; vendor preset: enabled)
   Active: activating (auto-restart) (Result: signal) since Tue 2021-05-11 11:49:46 CEST; 9s ago
  Process: 45594 ExecStart=/usr/bin/dotnet /var/www/app/ARAdventureWorksOLAP/ARAdventureWorksOLAP.dll (code=killed, signal=ABRT)
 Main PID: 45594 (code=killed, signal=ABRT)

that is, the reference to "ARAdventureWorksOLAP.service:1: Assignment outside of section. Ignoring." was missing.

This could have been a real headache to solve, because the actual hint about the real problem is gone after 10 seconds.

Note: Later on I would learn about journalctl, which probably would have picked this up anyway. But in general the default 10-second restart of services does not look like a good idea, because when something fatal is broken it just results in a constant stream of repeated error messages which can drown out the important initial message.
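The relevant knobs for taming this live in the unit file itself. A sketch of a less noisy unit; the directive names are standard systemd, but the values are my guesses at something saner, not what I actually use:

```ini
[Unit]
Description=ARNorthwind demo service
# Give up after 3 failed starts within 5 minutes instead of retrying forever
StartLimitIntervalSec=300
StartLimitBurst=3

[Service]
ExecStart=/usr/bin/dotnet /var/www/app/ARNorthwind/ARNorthwind.dll
Restart=on-failure
RestartSec=10
```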

The same problem had manifested itself in ARNorthwind.service and was easily reproducible.

I got the new ARAdventureWorksOLAP service up and running (internally as a service that is).

But then everything went downhill fast.

ARNorthwind.service would no longer run, and ARAdventureWorksOLAP would not show up in the web browser.

I guessed that the latter problem had to do with conflicting port numbers (it was natural to assume that each .Net service would use its own individual port number when communicating with Apache as the proxy server).

But I decided to concentrate on the former problem first because it was very unexpected. It felt more prudent to go back one step and get what had worked already back in order, before adding new functionality.

So I stopped the new functionality (the ARAdventureWorksOLAP service) and tried to get ARNorthwind (the old functionality) back up and running, but to no avail.

systemctl status ARNorthwind

would only show

(code=killed, signal=ABRT)

without any other reason given.

Clearly I broke something, but I was unable to understand what.

Stopping both services, doing "systemctl daemon-reload" and starting ARNorthwind did not help.

I could however start ARAdventureWorksOLAP, and got that one to respond on (that is, the wrong URL), meaning I had clearly misunderstood how to configure port numbers when using Kestrel.

(Changing the port number from 5000 in Properties\launchSettings.json was evidently not the correct approach, especially since I could find no trace of that json-file among the files in the runtime directory created by "Build | Publish" in Visual Studio.)

After some frantic fiddling I figured out that I had messed up chmod again when I did some tidying up of access rights, without checking that the service would actually restart afterwards. My own stupidity again.

Now I could get ARNorthwind running again.

And, as always, there were two problems manifesting themselves simultaneously, interfering with each other.

The other problem was that I could no longer start ARAdventureWorksOLAP when ARNorthwind was running; the status would show as

systemctl status ARAdventureWorksOLAP
(code=killed, signal=ABRT)

which was natural to assume was caused by the port number conflict mentioned above.

So, I had to google Kestrel and port numbers:

I had correctly assumed what is explained here:
(configuring Apache to point to a different port-number for each application)

but it does not cover how to change the port number in the .Net application itself.

Here is an excellently formed question explaining the problem:
with a corresponding excellent answer: just add (in the .service file) a command line argument to ExecStart like

--urls "http://localhost:5002"

This means I will not have to introduce a separate configuration file, or, even worse, mess with the source code of the application in order to solve what is a deployment problem.

I chose to explicitly state this command line argument in both .service-files, no longer using the default of 5000 / 5001.
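So the end state was one distinct port per application, stated explicitly in each unit file. A sketch of the relevant lines; the exact port numbers here are illustrative, not necessarily the ones I settled on:

```ini
# ARNorthwind.service
ExecStart=/usr/bin/dotnet /var/www/app/ARNorthwind/ARNorthwind.dll --urls "http://localhost:5002"

# ARAdventureWorksOLAP.service
ExecStart=/usr/bin/dotnet /var/www/app/ARAdventureWorksOLAP/ARAdventureWorksOLAP.dll --urls "http://localhost:5003"
```

Each application's Apache vhost then proxies to its own port, along the lines of ProxyPass / http://localhost:5002/.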

The moral of this is, as always, to make configuration changes in small steps, and to restart services afterwards to check that they still work before continuing.

I also once more got reminded of what an irrational optimist I am, starting something I am sure will take only 15 minutes but which turns out to require several hours instead.

And, .Net Core web has a zillion ways of expressing port numbers...

(I wonder if I should use the word 'alas'? I now get flashbacks to reading Jerry Pournelle's Chaos Manor column in Byte in the 1980's. It was always fun to scan for the first occurrence of 'alas'. Computing has not changed that much in forty years).

Step 5J: Third time lucky? Almost, must upgrade app from .Net Core 3.1 to .Net 5.0.

Total time: 1 hour.

TLDR; Learning to use journalctl to get more detailed log information.

Next site out was, an implementation of AgoBrowse (a web-application for creating and browsing picture books).

Everything went smoothly up to

systemctl status vifero

which showed a failure. But I knew this to be a .Net Core 3.1 app (instead of .Net 5.0 like the first two), so I immediately executed the application directly from my shell with

/usr/bin/dotnet /var/www/app/vifero/AgoBrowse.dll --urls "http://localhost:5006"

which quite clearly stated

It was not possible to find any compatible framework version
The framework 'Microsoft.AspNetCore.App', version '3.1.0' was not found.
  - The following frameworks were found:
      5.0.5 at [/usr/share/dotnet/shared/Microsoft.AspNetCore.App]

Instead of installing the 3.1.0 framework I just upgraded the application to .Net 5.0. It is nice to be in control.
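The upgrade itself is essentially a one-line change in the project file (these are standard MSBuild elements; the comment shows the old value), followed by a rebuild and republish:

```xml
<PropertyGroup>
  <!-- was: <TargetFramework>netcoreapp3.1</TargetFramework> -->
  <TargetFramework>net5.0</TargetFramework>
</PropertyGroup>
```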

In the process I also learned to use journalctl like

journalctl -u vifero | tail -100

in order to get more detailed log information.

But interestingly enough there was still no pressing need for using journalctl, which was kind of a good sign, I guess.

Apart from some minor details with access rights (again!) the rest was easy.

Step 5K: Installing a .Net application with linked third-party libraries, feeling like a pro.

Total time: 20 minutes

Up until now I had only installed .Net applications which used functionality only from the framework itself.

Next up was NordÆ, a throw-away-email service which demonstrates capabilities of AgoRapide (ARCore) (NordÆ is closed source as of May 2021)

This application uses the excellent MimeKit and MailKit libraries for (in my case) connecting with POP3 to an e-mail server.

The application was also .Net Core 3.1.0. I upgraded it straight away to .Net 5.0 and tested it locally.

The rest took just 10 minutes, including changing the DNS records for the relevant domains. There were no hiccups at all this time.

I was also now able to type all necessary Linux commands from memory, and start feeling like a pro.

Step 6: Running other (static) web sites on the server.

Total time: 1 hour.

TLDR; DocumentRoot in the VirtualHost .conf file does NOT refer to a file!

Now it was time to test other static web sites.

I chose (a proposed schema language, invented in 2011 / 2012) first as I am very fond of that idea (although nobody ever picked it up, and I became busy establishing an IoT company soon afterwards, so the project was left without any attention).

I placed panSL.conf in /etc/apache2/sites-enabled with the following content:

Note: Content is not the final one, see corrections below (Remember that I explain my process chronologically).

<VirtualHost *:80>
    ServerName
    DocumentRoot /var/www/static/panSL/panSLPreliminary.html
    ErrorLog ${APACHE_LOG_DIR}/panSL-error.log
    CustomLog ${APACHE_LOG_DIR}/panSL-access.log combined
</VirtualHost>

and copied necessary files to /var/www/static/panSL/

apache2ctl configtest

then came back with

AH00112: Warning: DocumentRoot [/var/www/static/panSL/panSLPreliminary.html] does not exist Syntax OK

which was a bit disconcerting since I could do for instance

cat /var/www/static/panSL/panSLPreliminary.html

I double checked access rights again, and it looked correct.

Preliminary googling did not help much.

Out of curiosity I tried

systemctl restart apache2

just to see what was available in my web browser.

Sure enough, I got a "403 Forbidden" response.

The problem was of course that DocumentRoot cannot be a file. The error message is quite unhelpful in this regard.

An alternative error message like

AH00112: Warning: The directory DocumentRoot [/var/www/static/panSL/panSLPreliminary.html] does not exist

would have clarified the issue straight away.

(I did read something about that during my preliminary googling, and thought I had tried changing it to "/var/www/static/panSL" without it helping. It turned out that in the confusion I had made the change in a local file on my client instead, and since I also misread the second answer on
as contradicting the first answer (sometimes I just read too fast), I mentally eliminated this as the problem and went all over the Internet for other explanations. All this may just be caused by the stupidity of my own brain, and of no interest to the reader, of course.)

apache2ctl configtest

now reported only

Syntax OK

BUT, I could still not browse to my site. Not even if I specifically wrote
being careful with upper case / lower case of course.

The problem now was that I had recursively set access rights for the static folder to 744 (read only) instead of 755 (read and execute).

Read sounded sufficient in my head, since no code is "executed"; it is a static site after all. I will not dwell on this, just accept it.
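Still, for the record: on a directory, the execute bit means permission to traverse into it, not to run code. A throwaway sketch:

```shell
# Apache runs as www-data, i.e. as 'other' with respect to my files
mkdir -p /tmp/staticdemo
echo '<html></html>' > /tmp/staticdemo/index.html

chmod 744 /tmp/staticdemo     # other: r-- ; may list names, may NOT open files
stat -c '%A' /tmp/staticdemo  # drwxr--r--

chmod 755 /tmp/staticdemo     # other: r-x ; may now traverse and open files
stat -c '%A' /tmp/staticdemo  # drwxr-xr-x
```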

The finishing touch was to add to panSL.conf

DirectoryIndex panSLPreliminary.html

in order to be able to only write in the browser.

The final content of panSL.conf thus became:

<VirtualHost *:80>
    ServerName
    DocumentRoot /var/www/static/panSL
    DirectoryIndex panSLPreliminary.html
    ErrorLog ${APACHE_LOG_DIR}/panSL-error.log
    CustomLog ${APACHE_LOG_DIR}/panSL-access.log combined
</VirtualHost>

Step 7: Rebooting

After reading about stable Unix-based systems with uptimes of years, but with a lingering boot problem, I of course had to reboot and check that everything was still OK.

(for instance if a service was only 'start'-ed, not 'enable'-d and similar).

I did


waited a few seconds and checked all sites implemented so far. No problem.

Note: Reboot seems to go a lot faster now when compared to my old AWS Windows instance, which always seemed to take forever.

Step 8: Setting up an email server

TLDR; The most time consuming, but also the most interesting part.

Combined mail solution?
Postfix, Sendmail or Exim?
Installing Postfix
IP-address clean?
Testing Postfix, installing Dovecot
Accessing from Thunderbird
Fine tune, SASL, PTR, SPF, IPv6
Handling spam
Email summary

Step 8A: Preliminary research (evaluating some Linux combined mail solutions).

Total time: 3 hours.

TLDR; No, don't. Go for individual components instead.

On Windows I had been using hMailServer (connected to PostgreSQL) for almost 10 years and was quite pleased with it.

On Linux I first had to learn the difference between MTA, MDA and MUA (Mail transfer agent, Mail delivery agent and Mail user agent respectively)

Sometimes there actually are good explanations on like this:

What I wanted was a combination of MTA and MDA. Thunderbird as MUA I intended to keep.

Looking at the Linux alternatives, there is clearly much to choose from; actually, there is too much to choose from.

What I want is a simple email server (simple to setup, simple to operate, for a handful of users only), and I am sure there are many Linux email servers which fulfill these criteria. BUT, I also want a server that a big proportion of the rest of the world also uses, in order to ensure support. Especially future support. I did not find a simple server satisfying these criteria. Finding tens of alternatives, each with a small userbase, and never knowing which of them will disappear next year, was not very helpful.

iRedMail turned up again and again in my searches. And top-level documentation like this really helps sell the product:

It is titled "Major open source softwares used in iRedMail" but the important parts are the diagrams "The Big Picture" and "Mail Flow of Inbound Emails", which although somewhat enthusiastically constructed really helps with understanding the big picture.

But iRedMail was not usable for me due to
"iRedMail is designed to be deployed on a dedicated server, which means you can not have other network services running on the server BEFORE iRedMail installation."
"iRedMail requires at least 4 GB memory for a low traffic production server."
(4 GB was the entire memory capacity of my old server.)

This was clearly overkill and too complicated.

However, other parts of iRedMail's documentation are still very usable, like
which explains A, PTR, MX, SPF, DKIM and DMARC DNS records. I do have PTR and SPF for my existing server, and reasonable success getting accepted everywhere, but clear instructions like these leave less of an excuse not to set up DKIM and DMARC in addition.
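As a note to self: SPF and DMARC are just TXT records. In zone-file notation they look roughly like this; the domain and policy values are placeholders, not my actual records:

```
example.com.         IN TXT "v=spf1 mx a -all"
_dmarc.example.com.  IN TXT "v=DMARC1; p=quarantine; rua=mailto:postmaster@example.com"
```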

SSL is also well explained here:
including getting a certificate from Let's Encrypt

But, as said, iRedMail was not usable due to its complexity.

Mailcow, the mailserver suite with the 'moo' (ha ha how funny), looked more reasonable.

This installation guide looked trustworthy and reasonable:

It explicitly shows how the situation of an existing web server (like in my case) can be resolved. It uses nginx instead of apache (which I have already installed), but that should not matter much. It even has a MUA, so maybe I could ditch Thunderbird after all?

However, compared to hMailServer, installation and administration look EXTREMELY complicated.

I read through the whole "official" installation guide and it shows how much there actually is to worry about.

I also do not much favour web pages which force me to read sequentially, with Next and Previous served page-by-page. Yes, there is a menu, I know, but it was difficult to use and still not satisfactory.

RAM usage is no better either: 6 GB RAM is mentioned as adequate for a SINGLE user.

So, goodbye to the cow and out looking again.

Next one out was Citadel,

"Easy to install. Easy to use. 100% open source. No compromises."
Looks interesting.

On the bottom of the page:

"Best of all, the Citadel project pledges never to be bound by a strangling "code of conduct". We know that social justice warriors destroy everything they touch."

linking to the actual Code of Conduct which is

"Code of Conduct
We reject cancel culture. We reject censorship. Political correctness is toxic -- it is soft tyranny, and it destroys everything it touches.
You are not entitled to preferential treatment just because you believe you are part of a "victim group." You'll be treated as well or as poorly as anyone else, based on your merits as a developer, or as a community member, or as a human being.

Respect is earned. Don't be a jerk."

I like this personally, but, as they say, YMMV.

Instructions look easy, like
with funny footnotes like

"There are no social media links here. Enjoy a friendly Citadel community instead. Or go outside."

As I do not use social media, I liked that.

However, it gets funnier, like here:
"Although we are big fans of open source software, we are not fans of Richard Stallman and his far-left communist tendencies to rename things."

but still they use GPL 3... This is interesting (or actually hilarious).

But also very distracting.

Unfortunately the whole community looks very quaint. Citadel originates from the BBS era, long ago. They eat their own dog food, running the support forum on Citadel itself, and it shows its age.

There is also this very frank assessment here:

Some extracts:

Citadel is in a very active development phase - with a LOT of team testing going on - and a major rewrite of the underlying code and distribution model taking place.
The end result should be a more supportable core application, that is easier to install, maintain and repair, that runs leaner with tighter code.
But I suspect the next version of Citadel will be a major step forward in making this all easier and also more supportable - so I recommend you stick with Citadel.
[don't give up!]
Citadel is one of the *best* mail-server solutions out there. Easy to comprehend, configure, administer and backup.

I decided to continue my search.

Throughout this process I got to more and more appreciate hMailServer. It really is a little pearl of a program, just the minimum that you need, with a minimalistic efficient GUI administrative tool. Some call it spartan because they would like to add something, I would call it perfection because there is nothing that you can remove.

Is there a Linux port of hMailServer?

This thread, "Support for Linux" is from 2006 :(
OK, forget about that.

But, was I thinking in altogether wrong terms?

I started to think like this:

Should I instead set up my own system? I see that the same components appear again and again, like for instance Postfix or Exim (for SMTP) and Dovecot (for POP3 / IMAP). I can start with a minimum suite of functionality, and build out of that as the need arises. With hMailServer for instance I started out without configuring spam-filtering, but added it later when I got tired of deleting stupid email.

This would actually be the original Unix philosophy: small interchangeable components, each doing one clearly defined task, and doing it very well.

I suppose that hardened Linux veteran readers now start whispering: "Good, I can feel your understanding. Come to the Linux side now."

With that whispering in my ear I decided to try it out.

In my search for a guide to combining individual components, it was interesting to stumble onto pages like this:
It explains a complete LAMP installation and looks very interesting, until I see the following:

"First, let us prepare a system that has a minimum requirement of Debian / Ubuntu version of Linux with at least 256MB of RAM available. Anything less than this minimum RAM will cause lot of problems since we are running a server along especially mysql and webmin requires lot of RAM to run properly."

Well, well, well. Not of the most recent origin, evidently. It really highlights a dearth of documentation when Google resorts to sending me to such dusty corners of the Internet.

Step 8B: Choosing between Postfix, Sendmail, Exim.

Total time 1 hour.

TLDR; Postfix (but I guess I could just as well have used Exim).

Now that I had decided to go for individual components instead, I had to choose an MTA first.
tells me that

"Sendmail is not dead, but it’s dying. Since 1996, its market share dropped from 80% to 4% of all online public email servers on the Internet."

Well, that was one less to consider.

On the same page:
"For beginner admins, Postfix would be easier to set up than any other MTA."

however, they recommend Exim for exactly the same reason.

And then again
"It has never been a good idea to run Exim
there's a widespread belief that Exim is somehow a secure mail server. Exim has never been that. The two secure C-language MTAs are qmail and Postfix, both of which were designed, from the ground up, on day 1, to mitigate the kinds of vulnerabilities ...
Don't run your own mail server. But if you have to, don't run a C-language mail server. But if you have to, run Postfix.
Don't run Exim."

OK, Postfix then.

And some biased, confirmation-seeking searching:
"Postfix is possibly the fastest growing MTA on the market today.
Postfix is extremely popular because of it's performance, and it's past security history. It is far harder (or almost impossible) to compromise the root user on a server that runs Postfix, than for instance Sendmail or Exim.
Postfix also supports the use of milters, which allow you to use external software solutions to pass mail from Postfix to anti-virus and anti-spam filters.
Postfix also runs faster with less system resources than most other MTAs (or at least, with standard configurations).
Standard configurations are easy to create, but if you need a unique setup, it can be a pain with Postfix. These strengths leave little mystery as to the sudden growth of Postfix as a Linux mail server software solution. Postfix is the default MTA on Ubuntu Linux."

Undated though :)

In reality, I guess it would not matter much (Postfix or Exim). But I really liked the presentation of documentation at the site.

says nothing about Ubuntu (yes, I know, Ubuntu is based on Debian, but nonetheless).

Step 8C: Installing Postfix.

Total time: 1 hour.

I decided to first update all components installed so far with:

apt update
apt list --upgradable
apt full-upgrade

with response

Need to get 220 MB of archives. After this operation, 92.3 MB of additional disk space will be used.

It was interesting to see that Hetzner participates in the mirroring; most of the packages came from

The download process from Microsoft was disappointingly slow however, for this specific one:

Get:21 focal/main amd64 dotnet-runtime-5.0 amd64 5.0.6-1 [22.1 MB]

After about 10 minutes it had just managed half of this package.

Interestingly enough, I accidentally pressed CTRL-C while the slow Microsoft download was in progress (in order to copy the Hetzner mirror URL above), thus aborting the download. When I had to repeat the process, the download from Microsoft went so fast that I first suspected something had broken.

I then continued with

apt install postfix

Apart from some general information, it also says:

Suggested packages: procmail postfix-mysql postfix-pgsql postfix-ldap postfix-pcre postfix-lmdb postfix-sqlite sasl2-bin | dovecot-common libsasl2-modules | dovecot-common resolvconf postfix-cdb mail-reader postfix-doc openssl-blacklist

This looks reasonable but somewhat overwhelming.

Choosing a database like postfix-mysql or postfix-pgsql (PostgreSQL) is however not something I must do, according to
which helpfully says that:

"LDAP and SQL are complex systems. Trying to set up both Postfix and LDAP or SQL at the same time is definitely not a good idea. You can save yourself a lot of time by implementing Postfix first with local files such as Berkeley DB. Local files have few surprises, and are easy to debug with the postmap(1) command."

From the configuration screen I chose
"Internet site: Mail is sent and received directly using SMTP"
"System mail name default to" (my server name)
and that was all.
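Those debconf answers essentially boil down to a handful of lines in /etc/postfix/main.cf. A sketch; the parameter names are real Postfix settings, but the hostname and domain are placeholders:

```
myhostname = server.example.com
mydestination = $myhostname, example.com, localhost
inet_interfaces = all
```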

I now discovered that I never applied the firewall to my server instance in the Hetzner Cloud Console, which was somewhat strange because port 80 had worked quite OK.

I applied the firewall, and tried with telnet to the SMTP port (port no 25) from my Windows client. Of course this failed because my residential internet (4G 'broadband') would have port 25 outbound blocked.

But, when I tried (through an RDP connection) from my existing AWS instance (which has port 25 enabled outbound because it runs hMailServer) I still could not connect.

"systemctl status postfix" says "(active, exited)" and "systemctl start postfix" does not change anything.

systemctl --type=service

however says that I have two services:

postfix.service loaded active exited Postfix Mail Transport Agent >
postfix@-.service loaded active running Postfix Mail Transport Agent (instance -)

which looked somewhat reassuring.


systemctl status postfix*

is better to use in order to list status of both services.

It did not take long for the first spammer to appear:

journalctl -u postfix* | tail -100
postfix/smtpd[67306]: warning: hostname does not resolve to address Name or service not known postfix/smtpd[67306]: connect from unknown[] postfix/smtpd[67306]: lost connection after EHLO from unknown[] postfix/smtpd[67306]: disconnect from unknown[] ehlo=2 starttls=1 commands=3

Amateurs! Not even doing their PTR's right.

(This reminds me of the good old days when a mail server by default usually would be an open relay. Now you definitely get a more secure basic install)

Now I could actually also telnet to port 25 from my Amazon instance, and the connection appeared in the Postfix log. I have no clue why it did not work on the first attempt.

Time to ask Hetzner to open port 25 outbound (SMTP port).

(There is no mention of port 25 blocking anywhere on Hetzner's pages. But it is of course natural to assume that port 25 is blocked, even though the firewall administration page of Hetzner's Cloud Console says "All outbound traffic is allowed. Add a rule to restrict outbound traffic.").

So I decided to open a Support ticket, which suggests:
"Server issue"
"Server issue: Sending mails not possible"
Choosing the latter gives this:
"Outgoing traffic to ports 25 and 465 are blocked by default on all Cloud Servers.
A request for unblocking these ports and enabling the send mail option is only possible after the first paid invoice."

Fair enough, some trust needs to be established.

Invoice information says:
"Invoice information
You currently receive 1 invoice a month.
The invoice will be created on the following day: of the month"

Unfortunately, that would be 20 days to wait.

I instead opened a support ticket and asked for an immediate invoice.

I got a quick response with "we made an exception and just unblocked your mail ports." Not bad. I suppose it was thanks to me being informative about this being a personal web server, and to sending the request from my personal email address.
Or rather, because a customer who ASKS to be invoiced probably sounds very serious, ha ha.

Step 8D: Checking that my IP-address is clean.

TLDR; Almost clean.

Total time: 1 hour.

IP addresses are like underwear, they may be dirty and you do not know who used it before you (technically, regarding the latter I suppose I could look up earlier use somewhere of course).

is a great service. It says whether there is any email spamming associated with an IP address.

I entered (which is the address Hetzner gave me).

It came out ok with 84 of 86 blacklist services. RATS Dyna and BARRACUDA came out as positives (I have never heard of them). I tested my old address (from which I have sent email for many years), and all 86 came out ok.

There was a TTL with both RATS Dyna and BARRACUDA which looked quite low. 900 and 2100 seconds respectively.
indicates that one reason the address is blacklisted has to do with the PTR being like '' instead of a 'real' PTR pointing to an actual company. but looking up the address directly with service:
gives a "Worst Offender Alert" which sounded more serious (they even said it was so bad I could not ask them to remove my address from the list, because the whole C-class may be involved).
says my address has a "poor" reputation.

"Barracuda Central maintains a history of IP addresses for both known spammers as well as senders with good email practices. The Barracuda Reputation System is a real-time database of IP addresses that have a "poor" reputation for sending valid emails."

Maybe it is just because the address was "unknown" to Barracuda.

I decided not to do anything about this for the time being. 84 out of 86 from anarchists on the Internet is maybe not too bad.

Note: Some days later the picture was exactly the same, with the same two services blacklisting my IP-address.

Note: Another tool,, which I used some days later said "Not listed in Barracuda" and "Not listed in RATS-ALL".

Step 8E: Testing Postfix and installing Dovecot (as MDA)

Total time: 2 hours

Dovecot was mentioned so often in connection with Postfix that I decided to go straight for that one as MDA.

Cyrus IMAP could probably also have been a good choice, but I found more recent installation guides for Dovecot and Ubuntu.
somewhat strangely does not start with installation, and they have no "getting started" section. Also, is not linked from the front page, but turns up in searches (it looks abandoned).
however has a fairly recent installation guide, and also refers to a (somewhat outdated) Community wiki page at

"To install a basic Dovecot server with common POP3 and IMAP functions, run the following command:
apt install dovecot-imapd dovecot-pop3d
By default mbox format is configured, if required you can also use Maildir."

I have no clue what the differences between mbox and Maildir are, and do not intend to find out either. This is a small personal server after all.

(Well, OK, I did read this:
Maildir is better overall because it is more scalable and can't get corrupted so easily. So, if you have trouble figuring out what you should be using and have a choice, choose maildir. But whatever. )
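The gist of the difference is just on-disk layout; a purely illustrative sketch (all names and paths are made up):

```shell
# mbox: every message appended to one plain-text file, separated by 'From ' lines
printf 'From alice@example.com Tue May 11 12:00:00 2021\nSubject: hi\n\nhello\n' \
    >> /tmp/demo-mbox

# Maildir: one file per message, spread over tmp/, new/ and cur/ subdirectories
mkdir -p /tmp/demo-maildir/tmp /tmp/demo-maildir/new /tmp/demo-maildir/cur
printf 'Subject: hi\n\nhello\n' > /tmp/demo-maildir/new/1620727200.demo
```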

apt install dovecot-imapd dovecot-pop3d

Note: Later on I discovered that I could probably have left out dovecot-imapd because I almost exclusively use POP3. And IMAP is not very well suited for the mbox format.

systemctl status dovecot

reassuringly says "active (running)" and "Dovecot v2.3.7.2 starting up for imap, pop3.".
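As far as I can tell, the key setting for where Dovecot looks for mbox mail lives in conf.d/10-mail.conf; a sketch of what the Ubuntu package ships with (mail_location is a real Dovecot setting, the value here is the usual Debian/Ubuntu default):

```
# /etc/dovecot/conf.d/10-mail.conf (fragment)
# INBOX is the per-user spool file that the MTA appends to; other folders live under ~/mail
mail_location = mbox:~/mail:INBOX=/var/mail/%u
```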

I also added port 110 (POP3) to my Hetzner Cloud Console firewall. I do not currently use IMAP so I did not open that port. Note: This later turned out to be unnecessary, since I would need port 995 for secure POP3.

I tried to telnet from my client PC to port 110, without any problems.

Now I had to integrate Postfix and Dovecot. This was not clearly explained in any of the pages I had consulted so far.

I understood that by default Postfix will use mbox for the mailbox format, the same as I chose for Dovecot, but I did not understand why there had to be a connection here.

Note: Later on I understood that Postfix actually DELIVERS the mail to (in my case) an mbox, from which Dovecot picks it up, making me question why it is not also an MDA in addition to an MTA. But anyway, that was later on; continue reading here for my progress:

Googling "integrate postfix dovecot" first suggested a guide which was WAY too complicated.

explains that

"Data for the mail server’s users (email addresses), domains, and aliases are stored in a MySQL (or MariaDB) database. Both Dovecot and Postfix interact with this data."

which is all clear and well but I do not want a dedicated database engine.

is more down to the point.

It explains the file /etc/postfix/virtual

"The virtual alias map table uses a very simple format. On the left, you can list any addresses that you wish to accept email for, each on a separate line. Then to the right of that you can enter the Linux user you would like to get that mail delivered to."

That sounds good, except that I do not have this file. But maybe the point is that I am supposed to create it from scratch.
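As a sketch, such a file might look like the following (the addresses here are hypothetical placeholders; the real entries map my public addresses to my local user):

```
# /etc/postfix/virtual (hypothetical addresses)   bef    bef
```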

By the way, the same guide also explains some basic tests that can be done with telnet like:

telnet 25
ehlo some_random_name
mail from:
rcpt to:

which results in

550 5.1.1 : Recipient address rejected: User unknown in local recipient table

which is all clear and well, since I had not done any of the mapping yet as described above.

I added myself as a new user with

adduser bef

(actually, I did not use "bef" out of security concerns, since bef is publicly known through my email address. The text below uses "bef" however in order to simplify the presentation).

I then created the /etc/postfix/virtual file with the single line

/etc/postfix/virtual bef

and executed

postmap /etc/postfix/virtual

I could now try telnet again as above, but I got the same response:

telnet 25
ehlo some_random_name
mail from:
rcpt to:

550 5.1.1 : Recipient address rejected: User unknown in local recipient table

Turns out that I overlooked this:

postconf -e 'virtual_alias_maps = hash:/etc/postfix/virtual'

(this adds to /etc/postfix/ the line "virtual_alias_maps = hash:/etc/postfix/virtual")

virtual_alias_maps = hash:/etc/postfix/virtual

and then I did again:

postmap /etc/postfix/virtual

Now the email was received OK (I used telnet as above).

telnet 25
ehlo some_random_name
mail from:
rcpt to:

250 2.0.0 Ok: queued as 191426042D

But where did it end up? There was no trace in the Z:\home\bef folder.

journalctl -u postfix* | tail -100

said "(delivered to mailbox)" so Postfix looked happy.

But I found it in /var/mail, as a file named bef.

By the way, why is Postfix now called only an MTA? It has actually delivered the email to a specific user on the system, so why is it not an MDA as well? You can find directly misleading information about this, like:
"At its core, an MDA is responsible for actually storing the message to disk. Some MDAs do other things as well, such as filtering mail or delivering to subfolders. But it is the MDA that stores the mail on the server."
Before writing this paragraph, just to make sure there was not some hidden mechanism I had misunderstood, I stopped Dovecot with "systemctl stop dovecot" and sent myself a message, and it was dutifully put into my own personal mbox-file by Postfix alone.

There are of course some clarifications available on the net, like
"Any program that actually handles a message for delivery to the point where it can be read by an email client application can be considered an MDA. For this reason, some MTAs (such as Sendmail and Postfix) can fill the role of an MDA when they append new email messages to a local user’s mail spool file."

I sent another email through telnet, and checked how both messages were concatenated into the same file. (Now I understood why the mbox format is not recommended: it is extremely primitive, just a plaintext file with each new message appended to the last one, and therefore very vulnerable to corruption.)

Note: See later on for how this becomes a moot point, because the file is continuously emptied when I read email since I use POP3.
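The append-only nature of mbox is easy to demonstrate with a toy file (a sketch with made-up messages; toy-mbox is just a scratch file, not my real mailbox):

```shell
# mbox is plain text: every message starts with a "From " separator line,
# and new messages are simply appended to the end of the file.
cat > toy-mbox <<'EOF'
From sender1 Thu Jun 10 10:00:00 2021
Subject: first

hello
EOF
cat >> toy-mbox <<'EOF'
From sender2 Thu Jun 10 10:05:00 2021
Subject: second

world
EOF
# Counting messages is just counting separator lines:
grep -c '^From ' toy-mbox
```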

Time for testing Dovecot with POP3:
gives a nice summary of common POP3 commands. Some of them are USER, PASS, LIST and RETR.

telnet 110

from my Windows client PC failed after entering "USER bef" with

Sending of username did not succeed. Mail server responded: Plaintext authentication disallowed on non-secure (SSL/TLS) connections.

so I tested from my PuTTY login on the server instead, that is, I did

telnet localhost 110

which went quite well:

+OK Dovecot (Ubuntu) ready.
USER bef
+OK
PASS ...(Not telling you)
+OK Logged in.
LIST
+OK 2 messages:
1 360
2 367

This looks nice. Dovecot now knew about the two messages that Postfix received.

I did "DELE 1" and checked in Z:\var\mail\bef that the first email was gone.

Step 8F: Accessing email from Thunderbird.

Total time: 1 hour.

TLDR; Use "STARTTLS" for Connection Security.

First of all I had to enable more secure communication to the server.

I added port 995 (Secure POP3) to my Hetzner firewall. I ensured through Telnet from my Windows client that something listened on that port (assumed to be Dovecot).

I then set up Thunderbird to access the account with POP3 on port 995, with Connection security "SSL/TLS" and Authentication method "Normal password".

Thunderbird came up with a security exception because
"the certificate is not trusted because it hasn't been verified as issued by a trusted authority using a secure signature".
(Remember, I have still not ordered a "real" SSL certificate).

I asked Thunderbird to permanently store this exception.

Under "Server settings" I also unchecked the box for "Leave messages on server (for at most 14 days)", leaving moot the whole issue about mbox or maildir formats on the server, since now the mbox-file would mostly be empty all the time.

I could now receive messages. The next step was sending.

Naturally I could not expect to use port 25 (it would be outbound-blocked on my client PC from almost any network).

So instead I added port 587 (SMTP for clients) to my Hetzner firewall, and set up identical security as for POP3. (Connection security "SSL/TLS" and Authentication method "Normal password").

Note: The correct setting soon turned out to be Connection security "STARTTLS" (see below).

It turned out that Postfix was not listening to port 587.

I changed /etc/postfix/, uncommenting the following lines:

submission inet n       -       y       -       -       smtpd
  -o syslog_name=postfix/submission

and executed

postfix check
systemctl restart postfix

Now it was listening on port 587, but Thunderbird complained that the connection timed out.

journalctl -u postfix* | tail -10

would show the connection attempt like

postfix/smtpd[165989]: connect from[]

and nothing more. Google helped me towards
where you are recommended to use Connection security "STARTTLS" instead of "SSL/TLS".

Now Thunderbird said:

[Sending email with Thunderbird]
Sending of the message failed. The certificate is not trusted because it is self-signed. The configuration related to must be corrected.

This was somewhat expected. Thunderbird made it easy for me. I just pressed OK to the error message and automatically got a new dialog where I could confirm this as a security exception.

Step 8G: Fine tuning Postfix, sending email, configuring SASL, PTR, SPF, IPv6.

Total time: 8 hours.

The email server shall also handle email to NordÆ, the throw-away-email service which demonstrates capabilities of AgoRapide (ARCore). This means that it must also host the domains and in addition to (I do also have some other domains for which I intend to add email in the future).

In /etc/postfix/ I added

virtual_alias_domains =

I then added to /etc/postfix/virtual:

/etc/postfix/virtual nordae nordae

(the local user nordae was created by the "adduser"-command).

(actually, I did not use nordae out of security concerns, since nordae is publicly known. The text here uses nordae however).

Note: My setup is what is covered under "Postfix virtual ALIAS example: separate domains, UNIX system accounts" in
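Pulled together, a sketch of this multi-domain setup could look like the following (the domain names and addresses are hypothetical placeholders, since my real ones are not shown here):

```
# /etc/postfix/ (hypothetical domains)
virtual_alias_domains = example.tld example.test

# /etc/postfix/virtual
info@example.tld    nordae
post@example.test   nordae
```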

It is reassuring to see warnings in the log like this:

warning: database /etc/postfix/virtual.db is older than source file /etc/postfix/virtual

before I ran

postmap /etc/postfix/virtual

Also, since I changed, I did

postfix check
systemctl restart postfix

I could now try to send email to addresses like

I first tested this by connecting directly to my server (sending from my own Thunderbird client).

In order to test the whole setup, including the DNS MX-records for the nordae domains (which I of course also had to change) I then tested sending of messages from an "outside" service (Gmail in this case).

Everything went without a hitch.

Next step was sending messages FROM my server to some "outside" address not connected to my server.

[Sending email with Thunderbird]

Now Thunderbird responded with:

An error occurred while sending mail. The mail server responded: 4.7.1 <[some "outside" email address]>: Relay access denied.

This was both somewhat disconcerting and reassuring. Reassuring because it is better to have a strict server setup; setting up an open relay on port 587 would get me blocked really fast. On the other hand, I would (somewhat naïvely) expect authorized users to be able to send messages, and Thunderbird was configured to authenticate itself when connecting to the server.

(Note that on my old hMailServer setup on the AWS Windows instance I skipped this whole issue by running Thunderbird (through remote desktop) on the same server instance that hMailServer was running on. Usually, SMTP servers accept mail to remote destinations when the client's IP address is in the "same network" as the server's IP address.)

I did not want to repeat this now however because I now use a laptop exclusively for daily work, in contrast to the situation back when I set up the AWS instance. At that time I used multiple stationary PCs instead (at home and at work) plus a laptop for mobile work. Having Thunderbird accessed through Remote Desktop then made sense, but not now.

Running Thunderbird remote also had some drawbacks, like when sending and receiving attachments (because these had to be moved from and to my PC) and also having to copy links in inbound email messages in order to run them locally. I also did not want to involve the Ubuntu GUI through VNC or similar. So far all server management on my new instance had been done through an SSH console, which I consider far superior to a GUI.

In /etc/postfix/ I saw one commented line:

# -o smtpd_relay_restrictions=permit_sasl_authenticated,reject

SASL was something I had hoped to avoid, but one guide kind of recommends reusing Dovecot's SASL configuration, which sounded better:

"Dovecot is a POP/IMAP server that has its own configuration to authenticate POP/IMAP clients. When the Postfix SMTP server uses Dovecot SASL, it reuses parts of this configuration."

And this very much corresponds to the information on the site.

So after reading through all this I decided to give it a try.

First of all, a telnet session to port 25 showed that the AUTH capability was not announced (after EHLO, a "250-AUTH" response like "250-AUTH DIGEST-MD5 PLAIN CRAM-MD5" was missing). So telling Thunderbird how to authenticate itself had not helped much anyway, because it had never even been given the chance to.

I did the following configuration changes for Postfix in /etc/postfix/

smtpd_sasl_type = dovecot
smtpd_sasl_path = private/auth
smtpd_sasl_auth_enable = yes
smtpd_sasl_security_options = noanonymous
smtpd_sasl_tls_security_options = $smtpd_sasl_security_options

and then executed

postfix check
systemctl restart postfix

These settings should be the minimum required for the server to be "safe" in a relay-perspective, while at the same time not imposing client restrictions which I neither need nor want because I will be the only client.

The funny thing now was that

telnet localhost 25

no longer worked. A connection would be established, but no response was received.

I knew that I had not finished the Dovecot side of the setup yet, so of course something was amiss, but this was disappointing.

journalctl -u postfix* | tail -100

told me

connect from localhost[]
postfix/smtpd[194990]: warning: SASL: Connect to private/auth failed: No such file or directory
postfix/smtpd[194990]: fatal: no SASL authentication mechanisms
postfix/smtpd[194990]: warning: process /usr/lib/postfix/sbin/smtpd pid 194990 exit status 1

I did not expect Postfix to break that easily (so early when attempting to establish a connection).

Note: Later on, after reflecting on this, it kind of makes sense anyway for Postfix to give up. Because it must be able to announce in the "250-AUTH" response what kind of mechanisms are available, and I suppose only Dovecot knows that answer.

OK, continuing with Dovecot: in /etc/dovecot/conf.d/10-master.conf, under "service auth {", I just uncommented the following:

# Postfix smtp-auth
#unix_listener /var/spool/postfix/private/auth {
#  mode = 0666
#}

and executed

systemctl restart dovecot

Now I could do

telnet localhost 25

and ehlo would result in "250-AUTH PLAIN".

Don't ask me why I tested against port 25 and not 587, but I suppose it really does not matter.
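For completeness: when testing "AUTH PLAIN" by hand in such a telnet session, the credentials are sent as a single base64 blob of NUL + username + NUL + password. A sketch with placeholder credentials ("bef" / "secret" are made up, not my real ones):

```shell
# base64 of "\0bef\0secret" — the one-line argument to "AUTH PLAIN".
# ("bef" / "secret" are placeholder credentials.)
printf '\0bef\0secret' | base64
```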

I tried to send the message from Thunderbird again (the one to an "outside" receiver).

Postfix was now happy accepting the message, but of course it was not accepted on the receiving end due to some missing DNS configuration.

In this case it was Gmail, which helpfully told me in an "Undelivered Mail Returned to Sender" message that

550-5.7.1 [2a01:4f8:1c1c:a7bf::1] Our system has detected that this message does
550-5.7.1 not meet IPv6 sending guidelines regarding PTR records and
550-5.7.1 authentication. Please review
550-5.7.1 for more information.

I asked for reverse lookup of the IPv6 address 2a01:4f8:1c1c:a7bf::1, with result "DNS Record not found" (although it was able to say that the IP address belonged to Hetzner), while for the IPv4 address it found a PTR record with content

Before sending a support request to Hetzner about setting up a IPv6 PTR record as requested by Google I decided to read more about PTR records and IPv6 in order to make sure I did not miss anything.

Googling "dns ptr" leads to
which explains how to interpret PTR records quite adequately, while another page explains how you actually set up PTRs (at least for IPv4).

Other pages also stated that typical cloud providers enable their customers to enter these records themselves, so I googled "hetzner ptr record" and found these:
which very clearly described how I could do all this by myself.

Now I could also set up a better PTR for my IPv4 address.

Note: For personal servers like mine, I always wondered why things need to be so complicated. Why can it not be sufficient for a receiving mail server to do an ordinary DNS A or AAAA record lookup based on what my email server sends in the EHLO?
That is, when my server sends "EHLO", would it not be sufficient for the receiving server to look up "" and ensure that the IP address matches?
The same principle also applies to checking the content of the MX record for the domain. Is this due to performance? I accept that for most setups this would probably fail, since for a typical setup the "" A record would most probably point to the web site (a web server), with the outgoing email server being on a separate IP. In other words, most attempts to do such checks would be futile, hampering performance.
But it would really have made things easier for small hobbyists like myself.

I set up proper PTR records (through the Hetzner Cloud Console) for both IPv4 and IPv6. For IPv4 I changed from "" to "", and for IPv6 I added "".

I also modified the SPF record (through my domain registrar) for to
"v=spf1 ip4: ip6:2a01:4f8:1c1c:a7bf::1 -all"

Note: I later discovered that I could probably have used
"v=spf1 mx -all"
instead (meaning that only hosts listed in the MX records are allowed to send emails for my domain).

I also found this really nifty tool
It receives messages from my mail server and gives them a spam score, with details about SPF, DKIM and DMARC. I was going to use this tool a lot, as you will see below.

It is even smart enough to tell me things like
"You recently modified your DNS, please do a new test in 12 hours."
with details like
"Your old record:
v=spf1 ip4: -all
Your future record:
v=spf1 ip4: ip6:2a01:4f8:1c1c:a7bf::1 -all"

Note: You can make use of it yourself even if you do not run your own email server. You do everything from your mail client, meaning you can test how your email provider scores in relation to spam.

This tool
helps me test my IPv6 setup.

It was especially useful for me.

Hetzner gave me "2a01:4f8:1c1c:a7bf::/64". At first I did not pay much attention to the /64 part, naïvely thinking I only had to enter "2a01:4f8:1c1c:a7bf::" as the AAAA record, but that would be the same as "2a01:4f8:1c1c:a7bf:0:0:0:0", while my server actually has "2a01:4f8:1c1c:a7bf:0:0:0:1" ("2a01:4f8:1c1c:a7bf::1"). Actually testing the AAAA record with this tool uncovers such errors.
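The mistake is also easy to reproduce locally; expanding the two notations shows that they denote different hosts (a quick sanity check, nothing server-specific):

```shell
# "::" pads with zero groups, so the short form without the trailing 1
# is host 0, not host 1, in the /64.
python3 -c 'import ipaddress; print(ipaddress.ip_address("2a01:4f8:1c1c:a7bf::").exploded)'
python3 -c 'import ipaddress; print(ipaddress.ip_address("2a01:4f8:1c1c:a7bf::1").exploded)'
```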

You can also send email to services like and get a report back.

I also learned to use ping in Windows with the -6 argument for pinging an IPv6 address, like

ping -6

but never got it to work from the network of my laptop (Telenor Mobil 4G). From ipconfig it does not look like I have IPv6 on my laptop. I could not connect with PuTTY either towards "2a01:4f8:1c1c:a7bf::1".
It is disappointing in 2021 to not have IPv6 as default from your ISP.

I also played a little with this one
which looks up the authoritative DNS nameserver for an IP address.

After all this tweaking, Postfix was now able to send messages to Gmail, but they did not appear in my Gmail inbox. This was not surprising, as the test tool did not give me a perfect score, and I would also expect Gmail to quarantine messages from an unknown server for some time. I had however just SENT a message from Gmail to my Postfix account, but I suppose that does not count for much in Gmail's anti-spam mindset.

I decided to wait one day before experimenting further.

The next day I tested again and got a much better score, 9 out of 10. The only minus was for not signing messages with DKIM. It also warned about a missing DMARC record.

Gmail would now receive my messages but unfortunately was putting them in the Spam folder. This had been a recurrent problem with my old hMailServer on Windows setup also, but it usually sorted itself out after some back-and-forth communication with new Gmail recipients. For sending cold emails this would of course not be acceptable.

I decided to look into DKIM to see if I could improve the situation (see next step).

Step 8H: Setting up DKIM and DMARC

Total time: 3 hours

TLDR; Using OpenDKIM although it looked old.

By now it was quite clear that email would be the most time-consuming part of my server setup. This was not very surprising, however; email IS complicated, due to the issue of spam.

Interestingly enough, my first searches did not point to the documentation on the Postfix site, so I had to google "dkim postfix".

This leads me to (Postfix Add-on Software)
which lists
"OpenDKIM MILTER plugin for Domain Keys Identified Mail." at

I also read about the "Sendmail version 8 Milter (mail filter) protocol". OpenDKIM however looked extremely old, with source code hosted on SourceForge (!) and the latest release in 2015. It also links to a page which was even older.

Maybe there was a reason for Google not wanting to link there for DKIM?

I decided to ditch "" from my query and instead went for
"How to Set up SPF and DKIM with Postfix on Ubuntu Server"

Unfortunately, this guide is extremely long and also covers DKIM verification of incoming email. For incoming mail my intention was to eventually set up a general spam filtering service, not to configure the individual components of spam filtering myself.

There are much shorter guides, but the one I found was old again (from 2013!).

This one is from 2014:

This did not look good.

One guide, although complicating things by involving SPF, is quite recent (2020).
I supposed I could extract only the parts relevant to DKIM signing.

Eventually I did one last search, and found the ideal guide:
from 2020.

The Quickstart section starts reassuringly with
"The quickstart instructions in this section describe setting up a minimal, but functional installation of opendkim for signing and verifying, integrated with Postfix. This is the five-minute version of opendkim configuration for the impatient."

Just what I needed. KISS.

sudo apt install opendkim opendkim-tools
sudo -u opendkim opendkim-genkey -D /etc/dkimkeys -d -s 2021

In /etc/opendkim.conf I did:

Domain
KeyFile /etc/dkimkeys/dkim.key
Selector 2021
Socket inet:8892@localhost

(Note: Not final version, see two corrections below)

systemctl restart opendkim

resulted in

Job for opendkim.service failed because the control process exited with error code. See "systemctl status opendkim.service" and "journalctl -xe" for details.

(The experienced user has spotted my mistake by now, but I am going to elaborate a little bit.)

journalctl -xe

gave me the following:

/etc/dkimkeys/dkim.key: open(): No such file or directory

of course, since I should have written

KeyFile /etc/dkimkeys/2021.private

Actually, the error message "/etc/dkimkeys/dkim.key: open(): No such file or directory" was quite well hidden within lots of "noise" related to restart attempts of the opendkim service. This could potentially have been difficult to resolve, showing how easy it is to stray from the correct path when following instructions.

Note: The key name "2021" reflects a practice of regularly renewing keys, so for instance next year I should generate "2022" and so on. Some guides recommend changing keys monthly, using names like "2021_05", "2021_06" and so on. Something tells me that I am "never" going to change my key however.

Anyway, after I corrected /etc/opendkim.conf and did

systemctl start opendkim
systemctl status opendkim

I now got "active (running)" as status.

The next step was publishing the corresponding DNS record "" with content from /etc/dkimkeys/2021.txt:

2021._domainkey IN TXT ( "v=DKIM1; h=sha256; k=rsa; "
  "p=MIIBIjANBgkqh..."
  "...oO87iwIDAQAB" )  ; ----- DKIM key 2021 for

from which I extracted

"v=DKIM1; h=sha256; k=rsa; p=MIIBIjANBgkqh...oO87iwIDAQAB"

which I entered as TXT record for "" with my domain registrar.
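Joining the quoted chunks can also be scripted. A sketch against a stand-in copy of the key file (sample.txt here is a scratch file standing in for /etc/dkimkeys/2021.txt, with the key itself elided as above):

```shell
# Extract every quoted chunk and concatenate them into the single string
# that goes into the registrar's TXT field.
cat > sample.txt <<'EOF'
2021._domainkey IN TXT ( "v=DKIM1; h=sha256; k=rsa; "
  "p=MIIBIjANBgkqh..."
  "...oO87iwIDAQAB" )
EOF
grep -o '"[^"]*"' sample.txt | tr -d '"' | tr -d '\n'
echo
```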

I then tested this with

opendkim-testkey -d -s mail -vvv

which unfortunately did not work:

opendkim-testkey: using default configfile /etc/opendkim.conf
opendkim-testkey: key loaded from /etc/dkimkeys/2021.private
opendkim-testkey: checking key ''
opendkim-testkey: '' record not found

Where did it get "mail" from? I literally called it "2021".

(The experienced admin gets another chance to laugh now).

Before solving this local issue I first tested my public DKIM record with
The short version of its conclusion was "This seems to be a valid DKIM Record.". It also gave lots of interesting details.

So at least that part was solved correctly.

But what about "mail"?

As often before, I went all over the Internet instead of using the information right in front of my face.

It was of course I myself who specified "mail" as "selector" when I did

opendkim-testkey -d -s mail -vvv

The correct invocation is

opendkim-testkey -d -s 2021 -vvv

which resulted in

opendkim-testkey: using default configfile /etc/opendkim.conf
opendkim-testkey: key loaded from /etc/dkimkeys/2021.private
opendkim-testkey: checking key ''
opendkim-testkey: key secure
opendkim-testkey: key OK

That was better.

To be fair to myself, the guide is a bit inconsistent, because it uses "2020" under "Quickstart" and "mail" under "DNS Configuration".

The next step was to configure Postfix.

I added to /etc/postfix/

smtpd_milters = inet:localhost:8891
non_smtpd_milters = $smtpd_milters

(sharp eyes, look out above)

and then

postfix check
systemctl reload postfix

(reload is better than restart, I learned, because it does not terminate the process but only asks it to reload the configuration files)

I then sent a message to, or rather, I tried to send a message, but something had broken with my Thunderbird setup (or so I believed):

[Sending email with Thunderbird]
Sending of the message failed. An error occurred while sending mail: Unable to establish a secure link with Outgoing server (SMTP) using STARTTLS since it doesn't advertise that feature. Switch off STARTTLS for that server or contact your service provider.


telnet 587

with "EHLO x" resulted in "250-STARTTLS" in the response. So that was strange.


journalctl -u postfix* | tail -100

gave me

warning: connect to Milter service inet:localhost:8891: Connection refused

(as always, suspect your last modification, however far the new problem appears to be from it)

And of course (as the sharp-eyed reader has already sniggered at), in /etc/opendkim.conf I had written port number 8892 instead of 8891.

I changed it to port 8891. Final version of /etc/opendkim.conf thus became:

Domain
KeyFile /etc/dkimkeys/2021.private
Selector 2021
Socket inet:8891@localhost

I did

systemctl restart opendkim

and tried again.

Now the message was sent, and the response came back OK.

I also tested with which came back with a perfect score (10 out of 10).

It said I still missed a DMARC record (which I of course did). And since I now had both SPF and DKIM, it was time to set up DMARC.

But first, I would like to try Gmail's spam filter. And voilà! There it came, straight to my inbox. I was now sure that my new setup would become better friends with Gmail than my old one (with hMailServer and Windows). The old one, as already mentioned, missed DKIM and DMARC and also had a non-optimal PTR (not pointing to the mail server's hostname).

I read through in order to understand DMARC better.

DMARC is a bit vague in what it promises. I suppose that is also the reason why I got a full score without DMARC, and why Google let me out of the spam folder without it.

PTR, SPF and DKIM are much easier to understand.
PTR peels away the most primitive spammers, who do not even bother with setting up one basic DNS record for their server. SPF says who is allowed to send on behalf of a given address, making forgery (false from-addresses) over SMTP more difficult. DKIM makes forgery in general more difficult by cryptographically signing the whole message.

Regarding DMARC, there is a lot of text out there. The gist is, as I understand it, that:

1) DMARC protects the reputation of the sender, by explicitly stating that messages not conforming to SPF and DKIM should be discarded. In other words, a random person cannot just produce or procure an (unsigned) email message and say it came from me, because I can point to DNS and say that, no, according to DMARC all messages from me are supposed to be signed.

2) DMARC helps receivers (mailbox providers) to keep users' mailboxes free of incoming fake email.

In case of 1), I do not believe myself to be noteworthy enough to impersonate.
In case of 2), I honestly do not see what difference it makes.
With or without DMARC, the presence of SPF and DKIM records should be enough information for a receiver in order to decide how to classify an incoming message.

It is also interesting to see one site state that:
"If you have not yet deployed SPF or DKIM, we do not recommend implementing them at the same time as DMARC. Change only one parameter at a time and start by DMARC first because of its reporting capabilities."

This is exactly the opposite of what I had read so far. In general it looks like they stress the reporting capabilities.

But anyway, in my case I set up this TXT record:
"v=DMARC1; p=reject; sp=reject;;; fo=1; pct=100; aspf=s; adkim=s;"
with the help of very clear instructions on

("p=reject" is the important part; "sp=reject" means reject also for subdomains (I am a bit unsure about the value of that one in my case); "rua" and "ruf" are just out of curiosity (will I ever get any reports back?); "fo=1" means generate a DMARC failure report if either SPF or DKIM fails (instead of the default, which is when both of them fail); "pct=100" means filter all my messages; and finally "aspf=s" / "adkim=s" means strict handling of the From header (everything is in my case, no or similar).)

Note: Regarding rua and ruf, Gmail diligently started sending me reports every day. The test tool now stated
"Your message failed the DMARC verification" with details
"The DMARC test failed but we didn't find any obvious reason why. If you recently modified your DNS, please wait a few hours and then test again."
but it also showed the correct content of my TXT record, so I would have expected it to pass (maybe they take some propagation issues into account, though).

I decided to check again the next day.

Note: Some hours later it stated "Your message passed the DMARC test".

There was also once something called ADSP, but one source states that
"ADSP was adopted as a standards track RFC 5617 in August 2009, but declared 'Historic' in November 2013 after '...almost no deployment and use in the 4 years since...'".
So I forgot about that one.

It is also worth mentioning that Google offers a lot of information and functionality for bulk sending to their users (Gmail) for instance via this link:
but I never found any need for this information.

I also see that other big email providers like Microsoft (with also have support functionality related to sending messages through their services.

In general it looks like all parties are interested in keeping email working despite the huge and complicated issue of spam. Although email may not be that important for younger people anymore, it still has its uses.

Step 8I: Enabling "plus" addressing.

Total time: 1/2 hour.

With hMailServer I used underscore, _, as a delimiter for plus addressing, that is, I could use "bef_" followed by any number of characters as an address, like for instance

In other words, I could then give out individual addresses to each organization that I communicate with on the web. This has a lot of advantages: Security, filtering of incoming messages, anti-spam and so on.
explains the concept in more detail and warns about a little caveat with Postfix if you use forward addressing.

The feature is enabled by default in the Postfix setup, but using the plus sign, +, instead of underscore. I like underscore better because it is not too obvious, and we are only a handful of users on the server so nobody is going to bother me about trying to use underscore for real in their username or similar.

I edited /etc/postfix/, changing "recipient_delimiter = +" to

recipient_delimiter = _

and did

systemctl reload postfix

This was actually the point at which I felt confident enough to change the DNS MX record for incoming messages to my main domain over to my new server. Hitherto I had only experimented with incoming messages to some throw-away domains that I have.

I was now waiting for the spam to appear...

Step 8J: Handling inbound spam (the easy way).

Total time: 1/2 hour.

TLDR; I just let Thunderbird trash all messages whose "Received: " header contains "unknown".

It was now time to look at inbound spam.

I am not obsessed with keeping my inbox folder absolutely free of spam. As I do not communicate that often by email, a few spam messages now and then actually tell me my email server is working and receiving email.

I now however got about 20-30 spam messages per day, which was a little too much.

Some quick investigation showed that almost all of the spam looked like this:

Some Received: headers
Received: from (unknown [])
Received: from (unknown [])
Received: from (unknown [])

That is, with "unknown" before the IP address.

While from serious senders it looked like:

Some Received: headers
Received: from ( [IPv6:2607:f8b0:4864:20::849])
Received: from ( [])
Received: from ( [])

I decided to try to implement a filter based on "unknown".

First I decided to make sure I had understood correctly what it meant:
I googled "postfix received from unknown".

One answer, for instance (although quite old, 16 years), explains it well:
"'unknown' results when PTR and A lookup do not lead to the same IP address."
(which of course was what I expected, having gone through all the motions earlier when setting up my server, but it is nice to get some confirmation before continuing).

So in Thunderbird I just added a message filter "Received contains unknown"
(I had to select Customize... in order to use the Received: header instead of the more usual Subject, From, To, etc.)

I then moved all spam messages (put earlier into the trash folder) back into the Inbox folder and ran my new filter. Almost all of the spam messages went straight back into trash, with no false positives.

The only remaining message was one in Norwegian(!) with a big-titted lush woman offering me massage. It had quite nice pictures in it, so not that bad actually.

I decided to use only this extremely simple spam filter for the time being.

Note: There is a weakness in how Thunderbird implements such a filter. It looks at ALL "Received:" headers, instead of only the topmost one (which is the only one I can actually trust, because that one comes from my server). So if a message has a complicated route, with multiple "Received:" headers, one of them containing "unknown", my filter may falsely flag the message as spam. However, organizations with such bureaucratic ways of sending messages are probably not those I want to hear from anyway. It would typically be a marketing email, for instance.
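The stricter check can be sketched in shell: look only at the first Received: header, the one my own server added. The sample message below is made up for illustration; the hostnames are hypothetical.

```shell
# Check only the FIRST (topmost) Received: header for "unknown" -- that is
# the only one added by my own server, hence the only one I can trust.
msg='Received: from mail.example.org (unknown [])
Received: from relay.example.net (relay.example.net [])
Subject: test'
first_received=$(printf '%s\n' "$msg" | grep -m1 '^Received:')
case "$first_received" in
  *unknown*) verdict="spam" ;;
  *)         verdict="ok"   ;;
esac
echo "$verdict"   # prints "spam" for this sample
```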

Step 8K: Email summary.

In general, implementing email was more pleasant than expected.
Default configuration parameters are quite sane, and the software was mostly used with the default installation. Any issues were quickly resolved through some googling.

Yes, it took an inordinate amount of time, but the time was spent learning, not being frustrated about difficult to grasp concepts or obscure bugs.

Step 9: Disk corruption issue.

TLDR; File system went into read-only mode. Fixed with fsck.ext4.

Some days after setting "everything" up I decided to upgrade the nordae service to also use the new email server (Postfix) instead of hMailServer on the old Windows instance.

This was just after I had uploaded the bulk of my files to the git repository (about 3 GB of files).

Due to SSL issues (certificate signing issues) when connecting with POP3 on port 995 in the NordÆ application, I had to modify the code somewhat. After restarting the nordae service I saw a huge number of exception messages. I decided to stop the service with "systemctl stop nordae" and due to private circumstances I was not able to fix the issue straight away. I therefore also decided to do "systemctl disable nordae" but was then met with

systemctl disable nordae
"Failed to disable unit: File /etc/systemd/system/ Read-only file system"

An experienced Linux admin would immediately recognize what this is about, but not me. I therefore led myself on a little wild-goose chase involving Postfix (because my Thunderbird client became unable to do both POP3 and SMTP) and Dovecot, including a Dovecot error

journalctl -u dovecot | tail -100
"pam_unix(dovecot:auth): Couldn't open /etc/securetty: No such file or directory"

which led me to a suggestion to change /etc/pam.d/common-auth,
which I tried to do with the nano editor,
which said

[Saving /etc/pam.d/common-auth]
"[ Error writing common-auth: Read-only file system ]"

Now I finally understood that the problem was not my code, but some serious issue with the server. So I instead googled "read only file system" or something similar.
A search result explains it:
"Read only file system" means safe mode, because some inconsistencies have been discovered.

fsck.ext4 -f /dev/sda1

resulted in messages like

/dev/sda1: recovering journal
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information

and then messages about:

Block bitmap differences ...
Free blocks count wrong for group ...
Inode bitmap differences ...
Free inodes count wrong for group ...
Free inodes count wrong

I said Yes to fix everything and I believe the problem went away, but could not check thoroughly until two days later.
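For future reference, a quick way to spot this condition early is to look at the mount options in /proc/mounts (on Ubuntu the root file system is typically mounted with errors=remount-ro, so ext4 flips to read-only when it detects inconsistencies). A minimal sketch, with a hypothetical helper and inlined sample data so it is self-contained:

```shell
# Hypothetical helper: given a mount options field as found in /proc/mounts
# (e.g. "ro,relatime"), report whether the file system is read-only.
# On a live system you would feed it:  awk '$2 == "/" { print $4 }' /proc/mounts
is_ro() {
  case ",$1," in
    *,ro,*) echo "read-only" ;;
    *)      echo "read-write" ;;
  esac
}
is_ro "ro,relatime"   # prints "read-only"
is_ro "rw,noatime"    # prints "read-write"
```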

Such issues are NOT very reassuring. The load was minimal, nothing special was done, etc. I will give Hetzner the benefit of the doubt about this, but if it arises again I will have to reconsider the whole project.

Part 3: Saying goodbye to Amazon, last kick from the discarded Windows server

TLDR; Windows threw a tantrum. PostgreSQL and hMailServer would not start after reboot.

As this project neared its end, I had to do some troubleshooting with my old AWS Windows instance. Logging in via remote desktop suddenly stopped working, I was met by a blank screen. The server itself seemed to be up and running.

I had to log into EC2 in order to reboot the server. But a simple Reboot did not work. Instead I had to do Stop, then Start. EC2 reported "2/2 checks passed" but I could still not ping the server. A few minutes of frustration passed before I remembered something about my "Elastic IP" needing to be reassociated with the server instance after such a restart.

And then hMailServer administrator failed with some obscure

[Launching hMailServer administrator]
Retrieving the COM class factory for remote component with CLSID {D6567EF8-0A6C-48E7-9288-A2463123C2F3} from machine localhost failed due to the following error: 8007042c.

Windows services said that the hMailServer service was not started.

Trying to start it resulted in

[Starting hMailServer service]
Error 1068: The dependency service or group failed to start.

which again reminded me about how my hMailServer setup depended on PostgreSQL, which also had not started (although there was information in its log about it having started...), which again I remembered was solved by killing all instances of postgres.exe in Task Manager.

Now I could finally start PostgreSQL and hMailServer again.

I wonder if Hetzner / Ubuntu etc. will give me similar frustrations?

Part 4: What I did not do

TLDR; ASPX, automate, containers, https.

I had to abandon the original application (an online implementation of panSL), since I believed it would be too much work converting it to .Net 5.0. It is an old .Net ASPX application, and ASPX is something I do not want to touch again. I was never comfortable with it; it tried to imitate the Windows Forms experience with stateful sessions, something which is inherently unsuitable for the Internet and HTML.

I did not try to automate the setup of my various applications. I suppose it would have been quite straightforward.

I did not try container technologies, like Docker, for my .Net applications. I do not think it would be of much use in my case, since everything is supposed to be deployed on the same server anyway.

I did not yet implement https on my web server, but I intend to do so in the future.

Part 5: Overall impressions

TLDR; GNU / Linux uses few resources. CLI better than GUI. PuTTY / SSH better than mstsc / RDP. Overall very satisfying.

In the end I had used under 10 GB of the disk space on my server instance (of the total 150 available). I was also using under 2 GB of memory. These are much smaller figures than what my old Windows instance was using. One could say that I went for too expensive an instance (160 GB / 16 GB), but I like having room to grow.

I have dabbled regularly with Unix-like systems since 1991 (anyone remember Minix?) but never taken the real plunge before. That is, I had never set up a server from scratch with such a diverse range of services. The experience was kind of as expected: a lot of dirty work, googling and intricate details, but once up and running it looks very stable.

Administering through the CLI is a relief after the endless pointing and clicking that is the default in Windows. (PowerShell never took off on Windows; although an excellent idea in theory, I still associate it with something slow and complicated.)

Logging in to an SSH console with PuTTY also goes much quicker than remote desktop (RDP), which always seems to take forever to initialize.

Was it worth all the time I spent on it? As a form of education, yes, absolutely. I do now understand Linux issues much better than before. If I had done this only as a one-off project, never intending to touch Linux again, it would of course have been a total waste of time.

I must also admit to a certain satisfaction in being independent.

But regarding privacy, keeping my personal data away from Google is of course hopeless, because I communicate a lot with people who use Google anyway (including my own company). So I suppose Google knows everything about me regardless. If everyone could run their own server, however... Well, dream on :)

Part 6: Behind the scenes

TLDR; Notepad. Self contained HTML page. License CR BY.

This document was coded "manually" with Notepad (!) on Windows as the editor.

The HTML page is entirely standalone (self-contained).

In the process I also refreshed some basic CSS and Javascript knowledge, as I am not using these technologies on an everyday basis.

See CSS and Javascript code under the [body] tag for more details.

June 2021. By Bjørn Erling Fløtten, Trondheim, Norway. License: Creative commons By (CR BY).