We should have a build script

Personally, when I see an interesting open source project I’ll lose interest if there’s a long list of dependencies & setup steps. Requiring an IDE to run the project is a bit silly. The build script can include tasks to create a database & import test/dummy data.

It’ll also make integration with Docker easier. (I know we’re not using Docker, but some people like to build/run new projects in Docker before deciding whether they want to install all the bloat/dependencies on their own computer.)

I’ve built plenty of Gradle & Ant scripts in the past, so I’m open to learning NuGet and starting on this if it’s accepted.


You don’t need an IDE. Just execute `dotnet run` in the `src/WebUI` directory.

The annoying thing is that you have to set up a Postgres database first; we should probably default to SQLite. I am planning to write better instructions for Windows in the future; so far I have only created instructions for Linux.


We can have the build script set up a database, just to get things going if they don’t set one up themselves. Ideally the auto-generated database would match the live site’s database, without the data.
However, the user will still need to install Postgres.
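As a rough sketch, such a "setup database" task could look like the script below. The database name and the use of the standard PostgreSQL client tools (`createdb`) are assumptions, not something the project already has:

```shell
#!/bin/sh
# Hypothetical "setup database" task for the build script.
# The database name is a placeholder; createdb is the standard
# PostgreSQL client tool for creating a database.
DB_NAME="${DB_NAME:-app_dev}"

if command -v createdb >/dev/null 2>&1; then
    # Ignore the error if the database already exists.
    createdb "$DB_NAME" 2>/dev/null || echo "Database $DB_NAME already exists (or the server is not running)."
else
    echo "PostgreSQL is not installed; please install it first."
fi
```

A real version would also run the schema migrations afterwards, but the point is that a newcomer runs one command instead of googling Postgres errors.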

I do agree that setting up the database is annoying. However, I don’t think a script is the solution; it would be better to simply fall back to SQLite if no configuration is provided (with a big warning attached, obviously). Then the database could be created automatically.
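A minimal sketch of what that fallback could look like; the configuration key, context name, and database file name here are assumptions, not the project’s actual ones:

```csharp
// Hypothetical sketch in Startup.ConfigureServices: fall back to SQLite
// when no connection string is configured. All names are placeholders.
var connectionString = Configuration.GetConnectionString("DefaultConnection");

if (string.IsNullOrEmpty(connectionString))
{
    // The big warning attached, as suggested above.
    Console.WriteLine("WARNING: no database configured; falling back to SQLite (development only).");
    services.AddDbContext<ApplicationDbContext>(options =>
        options.UseSqlite("Data Source=dev.db"));
}
else
{
    services.AddDbContext<ApplicationDbContext>(options =>
        options.UseNpgsql(connectionString));
}
```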

I could make a pull request for that, but I am currently working on the Windows instructions.

I’d like the local environment to match the live one as closely as possible. I don’t think switching to SQLite solves any issues.

setting up the database is annoying

It’s not only a tedious task that can be automated; it will most likely need to be automated for integration tests anyway.


As much as I would like that (because then I might be able to contribute a little, SQLite being the RDBMS I have some solid experience with), the two systems use significantly divergent dialects of SQL.

Typing, joins, and triggers are all pain points.


We use Entity Framework Core, which is essentially an abstraction over SQL. Internally all of the SQL queries are still executed, but the code cares very little about these details.

The underlying implementation could be swapped out with a single line of code, right here.
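For illustration, the provider registration is typically a single call like this (the context and variable names are placeholders):

```csharp
// PostgreSQL today:
services.AddDbContext<ApplicationDbContext>(options =>
    options.UseNpgsql(connectionString));

// Swapping the underlying database would be a one-line change, e.g.:
//     options.UseSqlite(connectionString)
```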

I’d feel far more comfortable with PostgreSQL (or at least MySQL) behind the scenes than SQLite, even for testing. I have some experience with this (with Django, but same concept of an ORM hiding the details but the details still mattering quite a bit). It should not be a big deal to spin up a local PostgreSQL - I’ve done this plenty with MySQL and PostgreSQL shouldn’t be much harder.

You can never fully escape from SQL. When a data-related issue occurs, the first thing to check is the SQL the database framework is spitting out.

It’ll be much more annoying to debug production issues if some edge case works differently locally, hidden behind the database framework’s magic.

Personally I don’t care if we use MySQL, MSSQL, PostgreSQL, etc. I’d just like the local environment to match the live environment.
I don’t understand what issue we’re solving by using SQLite locally, especially with a framework managing our SQL.


In the olden days, using SQLite locally for testing sometimes made sense. But the reality is that we all have in our desktops (or laptops) machines more powerful than servers from a few years ago, and a small PostgreSQL (or almost anything else) can run just fine locally. With MS SQL (or any other non-open database) it would get a bit more complicated, but with MySQL or PostgreSQL it’s no big deal. PostgreSQL is our implementation choice (and I agree with it), so we should use it for local development and testing too.

I don’t have experience to draw from beyond the last week or so; it was just a spontaneous idea.

However, the three commands currently needed for this can be run manually without a script. It would be overkill to introduce a script just for that.

The idea is to have a single command to have a working copy of the site. Also again for integration testing. Manually recreating the database before running tests would be silly.

Also, three commands may seem like a small thing, but they’re not so small when you’re new to Postgres and have to google errors and double-check typos for a project you’re still unsure about.

Currently, the integration tests use a different database anyway.

I am just worried that the script would create more problems than it solves. Sooner or later a script will become necessary, but at the moment we can get away without one; that is my opinion, at least.

Isn’t the .csproj file the build script in this case?

No, .csproj is just a file Visual Studio uses & creates. I think it’s used by the IDE for structuring/organizing files. For example, the files in the project view of the IDE can differ from what’s actually in the directories.

A build script is something that builds the application: setting up all dependencies, optionally running tests, running lint and other checks, creating databases, etc. It’s something that can be run from the command line, using Gradle/Ant/NuGet/npm/whatever.

That was the case in classic ASP.NET. In ASP.NET Core, a file is included simply by being in the folder.

It also configures a couple of settings, like which language version to use and default namespaces, and defines what to run before, after, or at several points during the build process.



Looks like this can all be done using MSBuild tasks.
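For example, a custom target in the .csproj can hook into the build. The target name, condition, and command below are illustrative, not something the project already defines:

```xml
<!-- Hypothetical target: apply EF Core migrations after a Debug build -->
<Target Name="SetupDevDatabase" AfterTargets="Build"
        Condition="'$(Configuration)' == 'Debug'">
  <Message Text="Applying database migrations..." Importance="high" />
  <Exec Command="dotnet ef database update" />
</Target>
```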

If we consider MySQL, we should also consider using MariaDB instead. It’s basically MySQL with slight differences (most people would never notice them), and instead of being controlled by Oracle, anyone can submit patches.

Please forgive (and educate) me if this is a naive question, but what is the benefit to testing with a DB other than the one we’re going to deploy with? What does that allow us to test more (easily, efficiently, whatever), compared to using the same actual DB? Doesn’t this just mean that, for anything involving the DB, test success or failure doesn’t tell you if you’d get the same results with the production system?

Some tests don’t interact with the DB at all, of course. In that case, aren’t they isolated enough to not talk to a DB at all and therefore not need an alternate one? (I’m thinking unit tests here where you deliberately mock the stuff that the test isn’t about.)

What am I missing? Why are we talking about other databases?


Testing against an in-memory database means that you don’t need a full PostgreSQL instance in order to run a test. Spinning up a full database takes time and resources, and each test needs to start from a clean slate.

An in-memory database spins up instantly, without needing a disk to persist the data to. It can be created and disposed of numerous times, once for each test. That way the database is always in a clean and known state when a test starts; none of the other tests have impacted this one.
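As a rough sketch, a per-test in-memory database with EF Core might look like this (the context class, entity, and xUnit assertion are assumptions about the codebase):

```csharp
// Each test gets a uniquely named in-memory store, so it starts clean.
var options = new DbContextOptionsBuilder<ApplicationDbContext>()
    .UseInMemoryDatabase("test-" + Guid.NewGuid())
    .Options;

using (var context = new ApplicationDbContext(options))
{
    context.Posts.Add(new Post { Title = "hello" });
    context.SaveChanges();
}

// A fresh context over the same options sees the saved data;
// a new database name per test keeps tests isolated from each other.
using (var context = new ApplicationDbContext(options))
{
    Assert.Equal(1, context.Posts.Count());
}
```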

From MSDN: https://docs.microsoft.com/en-us/ef/core/miscellaneous/testing/in-memory

The InMemory provider is useful when you want to test components using something that approximates connecting to the real database, without the overhead of actual database operations.

In most cases, the actual database connectivity is abstracted away. The developer isn’t writing SQL for PostgreSQL or MySQL directly. The developer is working with an object-relational mapping (ORM) layer that abstracts the raw database away. LINQ is one such layer that exists in C# ( https://docs.microsoft.com/en-us/dotnet/framework/data/adonet/sql/linq/ ).

For these cases, it is important to trust the ORM will do the right thing when dealing with the database.
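For example, a query is written once in LINQ and the configured provider decides how to execute it (the entity and property names here are made up):

```csharp
// The same LINQ runs unchanged whether the provider is Npgsql or InMemory;
// only the translation underneath differs.
var recentPosts = context.Posts
    .Where(p => p.CreatedAt > DateTime.UtcNow.AddDays(-7))
    .OrderByDescending(p => p.CreatedAt)
    .ToList();
```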

Another aside: in-memory databases are often able to map their implementation to the quirks of a traditional database. An example of this is HSQLDB and its ability to accept the grammar of different SQL databases: http://hsqldb.org/doc/2.0/guide/compatibility-chapt.html