Software Architecture Suggestion: API + Web UI instead of Razor

I like visual updates. For example, if you open a new SO question and spend 15 minutes typing an answer, by the time you post it you are no longer looking at a current page, and you don’t see any answers posted in the meantime. Whatever you see is not accurate. Someone may have posted the same answer already, and without updates, if you want to check you’d have to open another tab and download the entire page again.

Just keep it updated, like social media does. It’s much easier, and it’s a nice visual experience for users who are used to it. Notifications when you get a badge or when a user posts a new answer to your question are nice.

One (likely non-MVP) option would be to make it user-configurable. That way, those who want to keep JavaScript on for interactive features but don’t want push notifications can have it their way.

But posting would cause a refresh, so you would be looking at the current page.

But then you have to change or remove your post if the author added clarification, or if someone else posted the same answer. It saves time to see it happen when it happens.

1 Like

Dynamic update notifications are definitely something that we want. SO does this well with its “click to load latest edit” prompt. We don’t need it for the MVP, though. There are a bunch of ways to do this, depending on what JS framework(s) we choose; that is a discussion for another day.
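
For illustration, here is a minimal sketch of such a “click to load” notice, assuming a Server-Sent Events endpoint per question (the URL, event name, and wiring are hypothetical, not any agreed-on API):

```typescript
// Hypothetical SSE endpoint per question; nothing here is settled API.
const questionId = 123;
const events = new EventSource(`/questions/${questionId}/events`);

events.addEventListener("new-answer", () => {
  // Don't re-render immediately (the user may be mid-typing); offer a
  // banner they can click to load the latest state, SO-style.
  const banner = document.createElement("button");
  banner.textContent = "New activity on this question – click to load";
  banner.addEventListener("click", () => location.reload());
  document.body.prepend(banner);
});
```

The same idea works with WebSockets or plain polling; which transport to pick is part of that framework discussion.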

9 Likes

To all the proponents of static pages and server-only rendering: do you realize that this forum, for instance, is a SPA? A properly done SPA does not mean client-only or JS-only; it means that (when available) the client can work on its own, vastly improving the experience for the users:

  • Reduced wait time when switching pages. This also means much less bandwidth, since only data, and not full pages, needs to be transferred (sketched in the code below). Mobile-friendlier.
  • While the JS code is nowadays on CDNs and cached, this does not help at all with the parse and execution performance of the script. A SPA avoids repeating this overhead by design, since the script is parsed and executed only once. Again, the mobile experience is affected a lot here, since these devices have much less computing power than big PCs.
  • Less server load, which translates into better scaling to more users and better user experience in total while reducing costs.

The reasons mentioned here for not using SPAs, such as indexability/SEO, no real URLs, no JS-less experience, etc., have all been solved. These concepts are all well proven, and while I agree that a static page does conceptually work well for a Q&A, I do not agree that this means that no API+SPA with isomorphic rendering should be used.
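
To make the bandwidth bullet concrete, here is a rough sketch of client-side navigation that fetches only JSON and swaps one page fragment; the endpoint and data shape are invented for illustration:

```typescript
// Hypothetical API endpoint and data shape, for illustration only.
interface Question {
  title: string;
  body: string;
}

async function showQuestion(id: number): Promise<void> {
  const res = await fetch(`/api/questions/${id}`); // data only, no HTML shell
  const q: Question = await res.json();

  const main = document.querySelector("main")!;
  main.innerHTML = ""; // replace only this fragment of the page
  const title = document.createElement("h1");
  title.textContent = q.title;
  const body = document.createElement("p");
  body.textContent = q.body;
  main.append(title, body);

  history.pushState({ id }, "", `/questions/${id}`); // real URL, no full reload
}
```

After the initial load, each navigation costs one small JSON response instead of a full HTML document plus re-parsing of CSS and scripts.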

Is a web site that drains the battery faster really mobile-friendlier?

BTW, after the first discussion of SPAs here, it occurred to me that some of the strange (and previously, for me, unexplainable) bugs on YouTube can be easily explained by it being a SPA, with JavaScript state not being properly disposed of between “pages”. And sure enough, now that I usually open internal YouTube links in new tabs (thus forcing a true page load), I don’t regularly see those bugs any more.

1 Like

Do you have something to back this claim? The SPA should use significantly less CPU and network resources, since there is much less to do on every page change. And if you mean background activity, then this is not a SPA problem but rather a notification-system problem, which exists just the same for a non-SPA with notification system.

2 Likes

Well, it is admittedly a guess, but a reasoned one.

You need to render the new page in both cases, don’t you? And I don’t see anything else you have to do in a non-SPA context. In particular, you don’t have to execute any scripts for rendering (all the work is in the — hopefully highly optimized — native code of the browser).

The only way I can see my guess being wrong is if an active WLAN connection draws more power during a download than otherwise. In that case, the WLAN energy cost may be more significant than the CPU energy cost.

Note also that you mentioned less server load. How would that be achieved, if not by offloading that load to the client?

1 Like

You have to download HTML, not just data.
And you can’t reuse data and HTML cached from one page to the next.

JavaScript is compiled too – not just interpreted – and uses those “hopefully highly optimized” native DOM APIs exposed by the browser.

Yes, I think that the radio uses power – e.g., for reference, I found this semi-randomly using Google:

Network interfaces consume a large portion of overall device power (Krashinsky and Balakrishnan, 2005) and consequently, a great deal of work has focused on reducing the power consumption of networking components. Ultimately, the amount of power consumed is directly proportional to the amount of data transmitted (Feeney and Nilsson, 2001), and in some instances transmitting one byte of data requires 100 times the power consumed by one CPU instruction (Liu et al., 2004). Therefore, the power consumption of the network interface can be reduced by reducing the amount of data transmitted.

Caching, presumably – i.e., it’s less “load” to cache data and/or the DOM on the client than to keep re-fetching data from the server and re-building and re-rendering the DOM on the client.

Incidentally, load on the server also means that developers implement a lot of caching on the server – of pages and/or data and/or HTML page fragments.
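
As a tiny illustration of that client-side caching point (a sketch only; a real app would need invalidation and size limits):

```typescript
// Keep already-fetched data in memory so revisiting a page costs no
// network round trip at all. Sketch only: no invalidation, no size limit.
const cache = new Map<string, unknown>();

async function fetchCached<T>(url: string): Promise<T> {
  if (cache.has(url)) {
    return cache.get(url) as T; // no request, no radio wake-up
  }
  const data: T = await (await fetch(url)).json();
  cache.set(url, data);
  return data;
}
```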

1 Like

Render, yes, but not the full page – only part of it. HTML is chatty, even with GZIP (which needs some extra CPU power to reduce bandwidth). Generating fragments of HTML from code and/or templates is unlikely to be more expensive than transferring a full page over the net, parsing HTML, parsing CSS, parsing and running some JS (you are very likely to find jQuery and the like even on pages which are “statically” rendered on the server), then laying out and rendering a full page to screen.

Radio is indeed very expensive on mobile networks, especially pre-LTE, but it also comes at a cost on WLAN. It’s difficult to find a universal figure for this; a weak WLAN or mobile connection, for instance, will draw much more power due to high-power radio transmission. Interesting (only loosely related) fact: fewer hotspots do not equal less radio exposure; a weaker signal can actually increase the exposure due to high-power transmission by the client, and thus also increase its power consumption.

As mentioned above, yes, the server does get offloaded, but not purely at the cost of the client. In fact, the client does not have to do a full render/transmit/parse of everything, just small parts. So yes, some load is moved to the client, but the client also profits from having to deal with much less data, even if it needs to do a bit more with that data. In simple terms, it’s just more efficient resource usage overall, not having to do the same work over and over again (on both server and client).

1 Like

@avonwyss You are probably right in some respects, but in others not so, IMO. For the amount of HTML content we will be sending per page request, page size is negligible.

One big problem with SPAs is that you can cache the files but not the rendered view, which can be very significant for us, especially on question pages, the homepage, etc. (side note: there’s an interesting article about this).

I sincerely doubt that a standard page request is going to impact mobile battery more negatively than an SPA. If the thing most impactful to battery is bytes downloaded, I guarantee that an SPA is more data than a few normal page requests.

In an SPA, any action has to make a request back to the server and receive a response. That’s on top of the bytes for downloading JS files, which, again, are considerable.

Obviously there will be a point at which an SPA becomes cheaper than normal HTML requests, but I imagine the average number of requests a user makes on-site will not reach it.

But even if none of that is true, ultimately, going with an SPA approach means we cut out non-JS users altogether, which is unacceptable for us. We identified this as a prerequisite.

The only other alternative is to go isomorphic, but a lot of smart people made the decision to go with .NET for good reasons. It’s easy to hand-wave this, but to me it’s insane to choose a back-end stack purely based on the fact that we want an SPA on the front end.

As of writing, React and React-DOM gzipped and minified are ~35 KB. Don’t forget you also need your application code, which is going to be considerably larger than 35 KB. You are loading the code for every view in your app at once.

For some kinds of requests this is worth it, but what about all the single requests to view one single answer? On-site time might be 20 seconds, of which 19.5 seconds is just reading the question and answer. You’ve loaded all that code and data just for that.

Also, hand-waving SEO issues away is not particularly helpful. Read this. SEO will be hugely important for us. This is not something we should be hand-waving with “there are solutions for that”.

5 Likes

And as an avid reader I have easily read 100 or 1000 or 100,000 pages (not all in one session of course).

I don’t understand your argument – just one small SE page is 115 KB, or 35 KB compressed – and that’s just the HTML source, not including the JS and CSS which the page includes and which are probably cached.

Sure, your decision might be cast in stone – so perhaps this whole topic is off-topic – but there’s no need to justify the “no SPA” decision by deploying what seem to be counter-intuitive claims about performance and mobile battery life.

My https://react-forum2.herokuapp.com/discussions seems to be about 900 KB. That includes React, React Router, and my application-specific JavaScript, and the HTML templates in that – that’s for all the pages – and (because of the nature of the demo) all the back-end data too (250 KB of JSON).

If one HTML page is 115 KB, then the break-even point (the point at which an SPA becomes more network-efficient) seems to be about 8 pages – i.e., if you read more than 8 topics, then the SPA involves less data from the network.
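
For what it’s worth, the back-of-the-envelope arithmetic behind that claim, using the numbers above (and ignoring the per-page JSON an SPA would fetch outside this all-data-included demo):

```typescript
// Break-even estimate: 115 KB of HTML per traditional page view vs. a
// 900 KB SPA payload that, in this demo, already includes all the data.
const htmlPerPageKB = 115;
const spaTotalKB = 900;

const breakEvenPages = Math.ceil(spaTotalKB / htmlPerPageKB); // = 8
console.log(`The SPA wins on bytes after ${breakEvenPages} page views`);
```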

Maybe I’m wrong, misinterpreting what I’m reading, but that’s how the numbers seem to me.

2 Likes

I completely agree that non-JS usage and SEO are important features! As I wrote in my post, I think SPAs should always be used in an isomorphic fashion, and thus also enable basic usage without JS enabled and indexing of the site with real URLs.

I’m all for C#/.NET on the backend. There’s no need at all to pick another back-end to do proper isomorphic rendering. In fact, we (my company) are successfully using ChakraCore for isomorphic rendering in a .NET server environment. Basically, the MVC views use a view engine which uses ChakraCore to render the markup, and thus we can share the front-end HTML-rendering code on the .NET server.
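
The core of the idea, as I understand it, is a pure data-to-markup function that both sides can call; this sketch is illustrative, not our actual code – the browser uses it for partial updates, while the .NET server evaluates the same bundle in an embedded JS engine such as ChakraCore:

```typescript
// Shared rendering function: data in, markup out. Names and shapes are
// hypothetical; a real implementation would also HTML-escape the values.
interface QuestionView {
  title: string;
  answers: string[];
}

export function renderQuestion(q: QuestionView): string {
  const items = q.answers.map((a) => `<li>${a}</li>`).join("");
  return `<article><h1>${q.title}</h1><ul>${items}</ul></article>`;
}

// Browser: document.querySelector("main")!.innerHTML = renderQuestion(data);
// Server: evaluate the same bundle in ChakraCore and return the string as the view.
```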

Unless you completely rule out any dynamic functionality on the site, you’re going to end up with application code and some libraries (jQuery and whatever) which you also have to load. So I think this argument is somewhat moot.

Also, it is not true that all the code must be loaded for every page. Lazy-loading of modules and/or bundles has been around for a decade (AMD, which has wide support and several loaders, including RequireJS). As I wrote, all these problems were recognized long ago, and there are solid solutions for them around.
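
In today’s terms, the same thing is done with dynamic import(), which splits the bundle so a view’s code is downloaded only when first needed (AMD’s require([...], callback) offered the equivalent a decade ago). The module path and its render export here are hypothetical:

```typescript
// The mod-tools code is fetched on first use only, not on initial page load.
async function openModTools(): Promise<void> {
  const mod = await import("./views/mod-tools.js"); // hypothetical module
  mod.render(document.querySelector("main")!);
}
```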

Last but not least, the SPA+API approach enforces a proper separation of concerns between the UI and the back-end, since they are by design separated by a properly designed interface. An API design is just as important as a proper DB design, and should be the only interface to the backend, IMHO.
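
A sketch of what “the API is the only interface” means in practice – the UI codes against a small, explicit contract rather than reaching into views or the database (all names here are hypothetical):

```typescript
// Hypothetical API contract the front-end would code against.
interface QuestionSummary {
  id: number;
  title: string;
  answerCount: number;
}

interface QnaApi {
  listQuestions(page: number): Promise<QuestionSummary[]>;
  postAnswer(questionId: number, body: string): Promise<void>;
}
```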

2 Likes

It’s CPU cycles that drain the battery, last I checked. And an SPA imposes additional (computing) load on the device, on top of the normal HTML/CSS rendering (which is there regardless), so…

1 Like

Because any SPA served isomorphically will be downloading pretty much the same HTML, as well as the same CSS, but far more JS. With the traditional approach, the HTML and CSS are all you need. You’re going to need upwards of 100 KB gzipped of JS for an SPA. It’ll likely double the request size. Yes, you can split the JS and CSS per page, but we can do that for the traditional approach too, so it doesn’t get us very far.

Nothing I have said is counter-intuitive, and that’s not a particularly helpful comment. I’d also like to point out that what I, or any other person on the core contributor team, say is not gospel. We make decisions based on merit, and I don’t think I’ve seen a solid argument for an SPA over the traditional approach that has made me think, “yeah, if we don’t choose an SPA now, we are shooting ourselves in the foot for later.”

My whole point about SPAs is not the battery life of a mobile device or performance (although performance is certainly a factor); that was mentioned by another user. It’s mostly SEO, users with JS disabled, and caching – and also not over-complicating our MVP for little added benefit.

If we need to make the move to an SPA later (much later), then we can. But I just don’t see this as something that is applicable in an MVP. I’m not even convinced it’s applicable full stop – I think the overall benefits are marginal, TBH.

2 Likes

On every non-SPA page load, the browser needs to parse the full HTML, CSS, and JS sources, plus lay out and render the full page, and (if some scripts are used) initialize those as well. All of this is certainly more demanding on the CPU than some data transformation and partial DOM replacement in a SPA…

Are you thinking of a SPA where the server renders the HTML? That’s not what this topic is about, since it explicitly talks about an API-driven approach, which implies that the client does the transformation of data to HTML. And in that case, there is much less to download after the initial page load.

When JS is disabled, the browser should not download the JS anyway, which makes the isomorphic approach exactly as efficient as the “traditional” one for the SEO and non-JS use cases.

I’m not sure why the arguments brought up in this thread do not make sense to you @mattjbrent and @Marc.2377 so I guess it may be best if I step out of the discussion for now.

1 Like

One of the things I hoped to get from this topic was an informed opinion about why you would or wouldn’t (e.g. for this project), and how – so, thank you for your input.

I didn’t even know that ChakraCore now exists as a separate, embeddable component.

As someone very familiar with one engine in particular (Firefox’s Gecko; note: the linked repo is a mirror from mozilla-central), I must disagree.


I’m not sure why the arguments brought up in this thread do not make sense to you @mattjbrent and @Marc.2377 so I guess it may be best if I step out of the discussion for now.

I’ve just been letting some side discussion roll among those who would be interested in it, but, as far as I’m concerned (and speaking for the core team in general, AFAIK), a point has been made here (in #9):

[edit note 1]: …and in #6 before me (note that even the first part of the sentence is also a settled matter by now):


[edit note 2]: Figured that my continuous (and off-topic) engagement in this discussion was at risk of being misinterpreted, so the above is an official statement of sorts.

1 Like

SPAs (single-page applications) have both benefits and disadvantages. The same is true for any system, including our current tech stack, C#/ASP.NET Core.

While I think that SPAs, especially with isomorphic rendering using ChakraCore, might be a robust solution for speeding up our website, I think that we don’t need this yet. There won’t be much need for interactivity after page load for now, and probably not for the next few years. Q&A pages differ too much from one another (compare: the question list page, a question with a single answer, a question with many answers, mod tools, a tag page) to make content replacement as effective as it is for, say, a chat application.

It appears to me that there are tools – such as ChakraCore – which will allow us to “upgrade to SPA-level” later, if that turns out to be needed.

Having an SPA now would require us to write a lot of logic twice, as there was agreement not to simply load and insert the full HTML from the server. While we don’t want to make decisions too fast, we still want to have a usable product in a few months.

I am not generally opposed to SPA technology, and I think it might be a useful enhancement at a later time. There were some convincing arguments in this thread in favor of single-page applications.


However, we must also strike a balance between reconsidering every decision once we get new information and the consistency needed to start building. I am generally open to reconsidering decisions, but there must be strong arguments which outweigh the disadvantages. In my opinion, that is not the case here.

Hence, having read through all these posts again and after consulting with our technology lead @Marc.2377, the official decision is:

This is rejected for now.

This question isn’t declined forever, though. We might want to reconsider it in a few years; I am just saying that we won’t reconsider it now. I am sorry that some people might be disappointed by this. I hope that they stay with Codidact and help as much as they can and want to. We’ll need people helping with the front end too, so you can use your JavaScript skills there as well.

6 Likes