Considerations about the tech stack and architecture

Given that I’m new here, could you tell me why (or where) you decided to depend on .NET in your implementation?

My experience includes some years of full-stack ASP.NET – and FWIW I recently enjoyed using React.js (though with TypeScript, not JavaScript) instead.

I found it much more powerful/expressive/liberating than ASP.NET, e.g. for coding reusable UI components – and for modern web design like “single-page applications”, dynamic page updates, and whatnot.

I realise SE uses the MS software stack – I used it too for ages – but I don’t know why you’d want to as well?

2 Likes

That was decided on some while ago, back at the start of the project - before I even joined. There’s been some debate about whether it’s worth re-deciding now we have more people, but so far, that’s what we’ve got. FWIW, it’s not the stack I’d choose either, but it is what it is.

This is not an SPA. Q&A software done well is too large and complex to be an SPA. Dynamic page updates will be handled via custom JS instead.

3 Likes

Absolutely! In fact, I have heard the siren song of SPA for a long time and it hardly ever turns out to be that simple.

1 Like

Transferring JSON instead of HTML is a splendid reimagining of the web architecture, now that browsers are smart and standard enough to support it.

Thin server: what’s not to like?

And I really like that the code which edits the DOM in the browser (to show new data or to respond to user input) is the same codebase which creates the DOM in the first place, i.e. also in the browser – instead of having that elsewhere, i.e. on the server, with two code bases, each with its own language, libraries, and data.

I also like the separation of concerns – data on the server, presentation on the client. With ASP I have to fit into the page lifecycle; it’s a framework. Whereas React is more a library than a framework, so it doesn’t constrain how I organise my code. And it doesn’t pretend to be a web server; it’s all less monolithic.

The dev environment (VS Code etc.) is something else – edit in one window and see the result in another, no explicit recompile.

And SQL is all very well, but by the time you’ve defined the schema (DDL) and CRUD’d the data (ORM), just to get it into an intermediate server-side format (C#) which you then have to interface with an ASP template – one can make it all work, of course, and I kind of understand why SE did it that way (however many years ago that was) – it seems like bondage-and-discipline to me now, though.

I discovered the #voting channel on your Discord, so I can see how that decision was made. I’m pretty sure that if I were doing this alone, though, I’d use a more modern stack. IMO it’s amazing how much easier – more agile, terse – development might be now.

1 Like

If “single-page application” means what I think it means, I very much do not want that. I don’t care if it is considered “modern”. Certainly, a Q&A site where you cannot link to/bookmark specific pages would be very much reduced in value.

4 Likes

Correct. One of the keys to Google-ness (OK, all search engines, but Google is the only one that matters) is having a lot of well structured pages linked within the site and from other highly-ranked sites, which requires discoverable/linkable unique URLs.

1 Like

Also, our users need the ability to link to specific posts. (As will our own tools, though maybe there’s another way to do that.) I don’t know what “single-page application” means technically, but if it doesn’t allow deep links, that’s going to be a problem.

3 Likes

I coded an example of one; here’s an example URL …

https://react-forum2.herokuapp.com/discussions/86/pellentesque-sit-amet-ex-vel-libero-feugiat-condimentum

… I think those page URLs work just like Stack Overflow’s (and so can be bookmarked) – the fact that it’s implemented as a single-page app should be pretty invisible to the user – it’s not one of the reasons I listed for liking it.

I don’t know, the most visible effect might just be that it’s fast – I added a simulated delay to that demo.

It’s “single page” in that it loads all the UI code once. If you click on a link within the page, it fetches new page data and updates the browser’s URL. There is some standard hackery in the app to manipulate the visible/browser URL and history – that’s handled by a library though; I think it’s react-router-dom.
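Roughly, that client-side routing looks like this – a minimal sketch assuming the v5-era react-router-dom API; the component and route here are hypothetical stand-ins, not the demo’s actual code:

```tsx
import React from "react";
import { BrowserRouter, Switch, Route, Link, useParams } from "react-router-dom";

function Discussion() {
  // The router parses :id out of the URL, so deep links and bookmarks work.
  const { id } = useParams<{ id: string }>();
  return <h1>Discussion {id}</h1>; // a real app would fetch JSON for this id here
}

export default function App() {
  return (
    <BrowserRouter>
      {/* Link rewrites the visible URL and history via pushState – no full page reload */}
      <Link to="/discussions/86/pellentesque-sit-amet">A discussion</Link>
      <Switch>
        <Route path="/discussions/:id/:slug?" component={Discussion} />
      </Switch>
    </BrowserRouter>
  );
}
```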

Technically it doesn’t have to be “single page” (the fact that it’s an SPA is a red herring) – i.e. instead you could do it so that the UI code for each page is packaged and loaded separately – or in bundles of pages, e.g. a huge application might split its “user” pages and “admin” pages into separate bundles.
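For what it’s worth, that kind of splitting is built into the tooling now – a minimal sketch using React.lazy, where the AdminPages module name is hypothetical:

```tsx
import React, { Suspense, lazy } from "react";

// Each lazy() import becomes its own bundle, fetched the first time it renders.
const AdminPages = lazy(() => import("./AdminPages"));

export function Admin() {
  return (
    <Suspense fallback={<p>Loading…</p>}>
      <AdminPages />
    </Suspense>
  );
}
```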

Well, all I get on that page is “You need to enable JavaScript to run this app.” :wink:

Which if I don’t already know for sure that I want the content on the site (and that I trust the site to begin with), I generally won’t do. Instead I will try to find my information elsewhere.

Well, in this case I enabled JavaScript, and yes, deep links indeed work.

But I’m still not convinced. With standard design, the JavaScript would sit in a separate file anyway, so it would not be reloaded on new pages, same with CSS. So the actual page would only contain the actual content anyway, which has to be loaded in any case.

I guess you added artificial delay only to browser GET requests, not to AJAX requests. Because otherwise, you should still see the delay in getting the new content loaded (I assume the extreme time until the page shows up on first load is your artificial delay). Note that network delays are the same for all types of requests; the internet doesn’t have a fast lane just for AJAX.

3 Likes

Do you browse SO with JavaScript disabled?
You can, but voting etc. break.
Google can crawl React.
And apparently you can run React on the server too or instead.
So I don’t know that it’s a non-starter – I gather it is a modern standard.

Well, that wasn’t the point though – the fact that it’s single-page isn’t much relevant; those are packaging and deployment details – various forms of caching.

What’s more relevant IMO – i.e. to the estimate of the implementation time – is what I mentioned here.

The above, the URL I posted, is a demo – with no real web server or database. For that demo, my front end is bundled together with a simulated back-end via a simulated REST API, and that back-end is just TypeScript reading and writing some data in memory in your browser – the whole thing is physically just a static web page without a real database – my objective was to prototype the UI.

I assume that in the real thing (i.e. if there were a real server for this application), any page URL could be dual-purpose (see the sketch after this list), i.e.:

  • If a browser requests a page (e.g. from a bookmark), then it’s asking for HTML (in its Accept header), and the server complies by returning the app and the data for that page.
  • Or if the app is already loaded and requests a page (e.g. because the user clicks on a link within a page), then it requests just JSON (rather than HTML) from the server, and the server returns just that.
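A minimal sketch of that dual-purpose idea, using Express-style content negotiation purely for illustration (not this project’s stack; loadDiscussion and renderAppShell are hypothetical helpers):

```ts
import express from "express";

// Hypothetical stand-ins for a real database read and an app-shell render.
declare function loadDiscussion(id: string): unknown;
declare function renderAppShell(id: string): string;

const app = express();

app.get("/discussions/:id/:slug?", (req, res) => {
  // The already-loaded app sends Accept: application/json and gets data only...
  if (req.accepts(["html", "json"]) === "json") {
    res.json(loadDiscussion(req.params.id));
  } else {
    // ...while a browser (e.g. following a bookmark) sends Accept: text/html
    // and gets the app shell plus the data for that page.
    res.send(renderAppShell(req.params.id));
  }
});
```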

It’s really neat IMO.

But it’s wandering off-topic – because you decided to implement (and estimate) something different.

Still, as I said, I think a short implementation time (i.e. within “6-8 weeks”) is feasible in theory – if I were optimistic, and assuming a small number of dedicated developers and an already-defined UI to sprint with. Which seems to me to be in the same ballpark as a previous estimate, i.e.:

Looks like this thread has gone off-topic. I think this discussion should be moved to a new separate thread, if need be.

4 Likes

Since it indeed went off-topic, just a very short reply:

Not today. But when I discovered it, yes I did. And if it had shown me an essentially blank site back then, I would have left and never come back, just as I do almost every time this happens. I don’t know how many others do this (I do know I’m not the only one), but note that as site owner, even the most sophisticated analytics won’t ever tell you about people who don’t even visit your site, so you can’t know how many people you lose that way. I’d expect the fraction to be highest on sites aimed at technically skilled people.

SPA does not imply not being able to deep-link to specific things inside the app (posts, comments, whatever). You routinely use systems that function this way without even knowing the difference. The router that parses the URL path and loads the appropriate content may be on the client side or the server side, and given the way modern browsers work, you won’t notice the difference. At least, it won’t be apparent from your URL bar.

I’m not trying to advocate for SPA – I don’t really think that’s the way to go here – just saying that this objection (and indeed several of the others here) is not actually an objection to the architecture.

2 Likes

Would you share with me why that’s so, or link to where that’s already explained?
I’d like to know, both to better understand React and to better understand this project.
Because I thought the tech is quite mature now, and used publicly, so…?
And as you said, several of the more obvious objections aren’t necessarily applicable.

SPAs require JavaScript. A non-trivial number of users have JavaScript disabled (the last statistics I saw were somewhere between 1 in 20 and 1 in 50 users with JS disabled). Serving a site that doesn’t work to these users is terrible UX; adding a fallback so that the site does work for them means basically writing exactly the same software as we would if it wasn’t an SPA in the first place. One way you lose users, the other way you’re doubling the work you have to do.

Many of the benefits that SPAs bring can be included in other ways instead - there are libraries like Turbolinks that can make navigation smoother, and front-end frameworks or even just custom JS that can do dynamic data pulls and page updates.
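For instance, a dynamic update without an SPA can be as small as a fetch plus a DOM patch – a sketch only; the endpoint and element id here are made up:

```ts
// Refresh one post's vote count in place, without reloading the page.
async function refreshVoteCount(postId: number): Promise<void> {
  const res = await fetch(`/posts/${postId}/votes`, {
    headers: { Accept: "application/json" },
  });
  const { score } = await res.json();
  const el = document.getElementById(`vote-count-${postId}`);
  if (el) el.textContent = String(score);
}
```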

Finally and most crucially, SPAs are not something that every developer is familiar with. The default, as far as one exists, is a multi-page web app, which the vast majority of developers are familiar with. As far as a project like this is concerned where we’re relying on volunteer time to build everything, we want to maximise the number of potential contributors, not reduce it.

5 Likes

Might there be some standard work-around or best practice for that too?

There’s what’s called Universal React – for which there are several implementations, “Razzle” et al. – could that be relevant?
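If I understand it right, the core of those implementations is React’s own server renderer – a minimal sketch using renderToString from react-dom/server, which is how the first response avoids being blank for JS-less clients:

```tsx
import React from "react";
import { renderToString } from "react-dom/server";

function Page({ title }: { title: string }) {
  return <h1>{title}</h1>;
}

// On the server: send this string as the response body, then let the
// client-side bundle hydrate it and take over navigation.
const html = renderToString(<Page title="Hello" />);
console.log(html); // real HTML, readable without JavaScript
```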

Am I wrong to imagine that it is widely used on public web sites now – which would prove there are no remaining technological barriers?

Yes but then there are those two realms again – i.e. front-end and back-end – which you cited above as objectionable.

I’ve done that in the past using ASP.NET and JavaScript – but I did find it neater and simpler to have one source-code realm with React – so the code which updates the DOM uses the same language (e.g. TypeScript), the same library (React), and the same data types as the code which creates the DOM in the first place.
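To illustrate what I mean by one realm – a tiny hypothetical component where the same TypeScript/React code both creates the DOM and updates it in response to input:

```tsx
import React, { useState } from "react";

// One language, one library, one set of types: the markup below and the
// click handler that mutates it share the same state and the same code.
export function VoteButton({ initial }: { initial: number }) {
  const [score, setScore] = useState(initial);
  return <button onClick={() => setScore(score + 1)}>Score: {score}</button>;
}
```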


That might be the core of it.

Now quoting from another thread, Marc wrote …

… do you know what that means? Especially “lasts” and “better”?

I imagine some of that is project governance and community relations – so not part of the software implementation itself – but are those, also, “non-functional requirements” for the software?

How does that affect the implementation?

Choosing the stack which everyone knows means you’ll be biased towards using the stack which people started using 10 years ago.

My theory is that the newer stack has become popular for some good reason – and yet it’s not so new as to be bleeding edge – and maybe it’s time to adopt it for some competitive advantage.

Writing for ASP seems to me more like driving a juggernaut than a sports car – oh, it is feasible all right – not sure it’s the right size, though.

Could it be that choosing it is a case of Nobody Ever Got Fired For Buying Microsoft? Which, 30 years ago was “Nobody Ever Got Fired For Buying IBM” (mainframes)?

I mean, Ok, you have to pick something, and any decision is better than none.

Apart from that, though, I don’t know why people think it’s the better decision – I guess I know ASP pretty well and am only beginning to work with non-MS stacks – but there seems to be much less boilerplate to the latter, so it’s more expressive, flexible, and productive – even though I was learning the React-related tooling and designing with it for the first time.

So I don’t really get the decision – why people assert confidently that ASP is the right choice for this project. Yes, the MS stack was good enough when SE was first written. Is that (emulating that design choice) what you’re doing? What does “decided that we will build something that is different in that it lasts” and “we want to do better” mean, then? And if you want to compete, don’t you want a technological advantage (if there is one)? I’d like to know that – i.e. to share the vision.

I can’t say I’ve seen any of that going on, actually. .NET Core is the stack that was chosen when this project started, based on popular vote - i.e. which stack was the most popular among those involved in the vote, not which stack is the one and only for this project. That may or may not be a good way to choose a stack - but there’s something to be said for popularity, because you’re more likely to get folks to work with a stack they’re familiar with or like using.

On the contrary, I have nothing against separate front-end and back-end. Again, that comes back to the idea of it being the default - the majority of major web app development tools/frameworks do separate the two, so it comes once more to the idea of how many people will be familiar with it.

React is… still in its adoption phase, as far as I can tell. It’s used on some public web sites; I’m not sure I’d go so far as to say widely. It’s certainly popular in hobby projects, but that’s not representative of large production-grade web apps.

Here we have the core of it. Fundamentally, there isn’t any available technological advantage - some stacks are better than others for this type of application, yes, but you could viably write it in any number of stacks - React SPA, MEAN, API+Vue, Ruby on Rails, PHP+Laravel, ASP, .NET Core… certainly many others I’m missing. Once you get down to that choice, there’s realistically no major differences between them - each will have benefits and challenges of its own, but none are fundamentally unusable - and certainly none have “no remaining technological barriers”.

The choice of stack is mostly arbitrary - it’s just down to picking one where we can get as many potential contributors as possible. .NET Core fits that bill so far.

4 Likes

In case this was directed at me: yes, of course – I believe I do. These terms reflect a desire for this software project to still be extensible and maintainable years down the road, something that I’ve yet to see with any existing implementation. While I’m nowhere close to agreeing with everything Robert Martin says, he makes a good point in his book Clean Code [2008]: in a software project, 80% or more of all work is spent not on the initial implementation, but rather on maintenance.
(Well, to be honest, this wasn’t said by Mr. Martin himself, but actually by James Coplien, in the foreword; and whether 80% is a precise figure is debatable [1]. The point, however, still stands.)

Most existing (FOSS) implementations I’ve seen were developed in an “I’ll figure it out as I go” fashion. At least two turned out to be good enough and managed to be usable in production, namely AskBot and Question2Answer. I can’t speak too much for the first. As for the second, the architecture is, nowadays, fairly maintainable it seems to me, provided you don’t wish to deviate from the general functionality too much. And yet you can look at their commit history and figure out how long it took them to refactor DB queries, data structures, and important mechanisms (such as authentication and caching) until something maintainable was achieved. It took long enough, I’ll tell you that.

Interestingly enough, on AskBot’s initial page you’ll find a link to this Q&A: Why design is not better?. Here’s what Evgeny Fadeev, the primary project maintainer (user id 1, mind you), has to say:

My guess is that the existing implementation is so bad that improving the design by tweaking what there is is pointless.

What is needed is a complete rewrite of the front end, probably not using the django/jinja2 templates, b/c it feels outdated compared to the modern UI approaches.

So. When I say that we want “something that lasts”, I’m referring to an implementation that is flexible enough to allow for adding improvements 5+ years from now, without anyone feeling distressed and discouraged to the point of considering a complete rewrite instead.

When I say that we want to build something “better”, I allude to the fact that SE has a damn good product to offer, and whatever issues other people and I are having with their services, the software is not one of them. Ultimately, I believe I speak for the team in saying that offering a product that is polished enough and on par, feature-wise, with SE is an essential goal if we are serious about replacing their platform.

Their implementation is ASP.NET, and IIRC there have been talks over the last couple of years about switching to the obvious evolution of this stack, which is Core. We picked this as we feel it’s a worthwhile choice.
Note that the team didn’t vote for the stack they’re most familiar with – for the majority, it isn’t. They (actually, we) voted for the stack we’re most willing to learn and work with, which also happens to be a stack we think is a great choice. ASP.NET Core is nowhere near 10 years old; it was first released in 2016 and has been considered production-ready for about 2 years, give or take.

Now, hopefully you are familiar with the distinction from ASP.NET, which is based on the much older and more mature .NET Framework. .NET Core is similar enough to .NET, and yet it is a substantial evolution of it.

C# today features vast improvements compared to the C# from 10 years ago. Tooling is great, and totally free for open source projects like our own. The language is expressive, safe and performs quite well, while not imposing one specific programming style on everyone. By ‘safe’ I mean you can trust the compiler and environment to prevent you from messing a lot of things up. If that’s not enough you also get great static analysis tools. The debugger is widely acclaimed and the IDE offers more-than-decent refactoring tools. The production stack is fully open and support for OSS datastores is respectable.
You’d have been hard-pressed to consider using a Microsoft stack with Postgres and hosting it on Linux a few years ago. Today things are, thankfully, different.

This is all to add on top of what Art said.

2 Likes

This is the key to me. My experiences with ASP many years ago, and with Microsoft in general, have turned me off to using a Microsoft stack for any project. But being able to work with Postgres means that a ton of other options are available in the future without changing databases, and that we are not tied to the expense (obvious) and other limitations (not so obvious, and more open to debate) of MS SQL. Hosting on Linux is also a key - I am still (and will likely be for a while) in Windows on the desktop, but with occasional rare forced-by-others exceptions, I have not hosted any web stuff on Windows in many years - and I would be very reluctant to get involved in this project if that were the decision here.

It’s easy to see how the data is relational.

Do you foresee any future requirement for “tree-like” data? E.g. “nested subsections” of a document, or e.g. “threaded conversations”, or …?

If there were such a requirement (or feature request) in future, do you know (can you anticipate) how you’d support that at the back-end, i.e. how you’d store that in your database engine?
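E.g. I imagine the usual relational answer is an adjacency list – each row points at its parent, and the tree is reassembled on read (or, on the SQL side, with a recursive CTE such as Postgres’s WITH RECURSIVE). A minimal sketch with hypothetical names:

```ts
// Each stored row carries its parent's id; null marks a top-level item.
interface CommentRow {
  id: number;
  parentId: number | null;
  body: string;
}

interface CommentNode extends CommentRow {
  children: CommentNode[];
}

// Rebuild the tree from the flat rows a relational query would return.
function buildTree(rows: CommentRow[]): CommentNode[] {
  const byId = new Map<number, CommentNode>();
  rows.forEach((r) => byId.set(r.id, { ...r, children: [] }));
  const roots: CommentNode[] = [];
  for (const node of byId.values()) {
    const parent = node.parentId !== null ? byId.get(node.parentId) : undefined;
    (parent ? parent.children : roots).push(node);
  }
  return roots;
}
```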