Eternal Sunshine of the Spotless .NET: Why We Stuck with .NET and How It Paid Off

Ten years ago, when I was starting out with .NET Framework 3.5 (C# 3.0), its functionality felt extremely limited to me, because I began with SharePoint 2010. After studying a wider range of technologies, and keeping an eye on .NET development all along, I watched it grow from a doubtful Java competitor into a solid cross-platform stack that can run daemons on Linux (even though it was originally meant for Windows only). Of course, when I first encountered the technology, everything seemed sufficient: I always found a way to solve a task. Now that I have worked with different platforms and versions, though, I am sure that life was more painful back then.

If you want to look back at that era and compare the older and newer versions, this article is for you. I think it will be useful both for beginners who don't know much about the features of earlier versions and for those who want to get nostalgic.

The Pain Era

When I started developing, SharePoint 2010 ran on .NET Framework 3.5 and already included LINQ and AJAX. Still, it was heavily constrained by the platform: it was hard to extend, and hard to find appropriate tools for.

Pain #1: Creating a single application

Back then, developing web parts for SharePoint was based on WebForms, and each web part was essentially a separate application. Under such conditions, building one unified app was impossible for me: a database connection context was initialized separately for every web part, so a single shared context was out of the question. For instance, to output data from the database onto pages, I used a SqlDataSource with its own database connection inside a widget. So, to pull data from 3-4 tables, I had 3-4 DataSources on one widget, which hurt the page load time. ADO.NET Entity Framework had already appeared by that point, but it was inconvenient to use with SharePoint until version 4.1 because of interoperability problems between the products.
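
To make the pattern concrete, here is a rough sketch of what such a web part looked like (the class name, queries, and connection string are invented for illustration): each table gets its own SqlDataSource, and therefore its own connection, inside a single widget.

```csharp
// A hypothetical WebForms-era web part: one SqlDataSource per table,
// created in CreateChildControls, with no shared data context anywhere.
using System.Web.UI.WebControls;
using System.Web.UI.WebControls.WebParts;

public class OrdersWebPart : WebPart
{
    protected override void CreateChildControls()
    {
        const string conn = "Data Source=...;Initial Catalog=Sales;Integrated Security=True";

        // Each data source opens its own connection when it binds.
        var ordersSource = new SqlDataSource(conn, "SELECT * FROM Orders") { ID = "OrdersSource" };
        var customersSource = new SqlDataSource(conn, "SELECT * FROM Customers") { ID = "CustomersSource" };

        var ordersGrid = new GridView { ID = "OrdersGrid", DataSourceID = "OrdersSource" };
        var customersGrid = new GridView { ID = "CustomersGrid", DataSourceID = "CustomersSource" };

        Controls.Add(ordersSource);
        Controls.Add(customersSource);
        Controls.Add(ordersGrid);
        Controls.Add(customersGrid);
    }
}
```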

Pain #2: Missing templates and tooling support

We wrote web parts for SP 2010 using SP 2007 web part creation techniques, because Visual Studio 2008 had no templates or support for the new version. When Visual Studio 2010 was released, those templates were ready and helped us a lot: we could now create list definitions and edit their code in the Studio, and build site templates (by coding the necessary content types and list descriptions). Before that, we did it all by hand-editing XML files, which was painful for .NET beginners: a developer couldn't understand which file was being edited or why, and simply followed whatever was written on a forum.

Pain #3: Asynchrony

.NET Framework 3.5 did not support asynchrony out of the box: you had to run code on a separate thread and wire it up with event handlers and delegates. In WinForms, we could also use BackgroundWorker (a second, parallel worker for offloading separate tasks). In other words, writing asynchronous applications was possible, but the implementation was beyond a junior developer's understanding.
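
Here is a rough sketch of that pre-TPL pattern in WinForms (the label and the line-counting helper are invented for illustration): the work runs on a BackgroundWorker, and the result comes back through an event handler.

```csharp
using System.ComponentModel;
using System.Threading;
using System.Windows.Forms;

public partial class MainForm : Form
{
    private void StartCounting()
    {
        var worker = new BackgroundWorker();

        worker.DoWork += (sender, e) =>
        {
            // Runs on a thread-pool thread; no UI access allowed here.
            e.Result = CountLinesInFiles((string)e.Argument);
        };

        worker.RunWorkerCompleted += (sender, e) =>
        {
            // Raised back on the UI thread, so touching controls is safe.
            resultLabel.Text = "Lines: " + e.Result; // resultLabel: a Label on the form
        };

        worker.RunWorkerAsync(@"C:\logs"); // the argument handed to DoWork
    }

    private int CountLinesInFiles(string folder)
    {
        Thread.Sleep(2000); // stub standing in for real long-running work
        return 42;
    }
}
```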

The Task Parallel Library appeared in .NET Framework 4, which meant we also got tasks. Now, instead of wiring up delegates, we could simply create a task, pass it an action, and run it on a parallel thread, staying aware of its status and receiving a signal when it completed. It was a great step for parallel development, which you need when processing big data, and which previously had a high entry threshold.
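
A sketch of the same idea with the TPL as it looked in .NET 4 (the prime-counting method is a stand-in for any CPU-bound job):

```csharp
using System;
using System.Threading.Tasks;

class Program
{
    static void Main()
    {
        // Create a task from an action and start it on a parallel thread.
        Task<int> task = Task.Factory.StartNew(() => CountPrimes(1000000));

        Console.WriteLine(task.Status); // we can observe its state at any time

        // A continuation fires when the task completes, replacing event wiring.
        Task done = task.ContinueWith(t => Console.WriteLine("Primes found: " + t.Result));

        done.Wait(); // block here only so the demo doesn't exit early
    }

    static int CountPrimes(int limit)
    {
        int count = 0;
        for (int n = 2; n <= limit; n++)
        {
            bool prime = true;
            for (int d = 2; d * d <= n; d++)
                if (n % d == 0) { prime = false; break; }
            if (prime) count++;
        }
        return count;
    }
}
```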

…and frozen windows

Web development differs a lot from desktop development (by "desktop" I mean not console apps in particular, but any application that runs in the OS's own interface). In a desktop application, all operations are performed synchronously by default, and if one takes a long time to process, the interface "freezes", as if the application has hung. To avoid the feeling that the program isn't responding, we ran all long operations on a separate thread and introduced progress bars, so that a user could see the app was active and manage it from another thread, for instance via a delegate.
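
For illustration, a sketch of how we kept the UI alive by hand in WinForms (the progress bar is assumed to be a control on the form): the work runs on a raw thread, and each progress update is marshalled back to the UI thread through a delegate passed to Control.Invoke.

```csharp
using System.Threading;
using System.Windows.Forms;

public partial class MainForm : Form
{
    private void StartLongOperation()
    {
        new Thread(() =>
        {
            for (int i = 0; i <= 100; i++)
            {
                Thread.Sleep(50); // stands in for a slice of real work

                // Controls may only be touched from the UI thread, so the
                // update travels there as a delegate via Control.Invoke.
                progressBar.Invoke((MethodInvoker)(() => progressBar.Value = i));
            }
        })
        { IsBackground = true }.Start();
    }
}
```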

Pain #4: Deployment is coming

One more painful technology appeared with .NET Framework 3.5: MS AJAX. In practice, only the UpdatePanel saw use (its content was refreshed from the backend); the rest of the library wasn't used at all. It worked badly in SharePoint because of how controls are initialized in the page lifecycle. In our case, it only started working after the first post-back (sometimes after the second), so getting MS AJAX to behave properly right away was hard, even though in plain WebForms the UpdatePanel was easy to use. Classic AJAX (XMLHttpRequest) wasn't a viable option in that SharePoint version either, because every action required writing an event handler in the backend and adding it to every web part's folder, and sometimes this functionality simply couldn't be extended.
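
For contrast, a sketch of how simple the UpdatePanel was in plain WebForms (the page, control names, and the clock example are all invented): the button triggers an asynchronous post-back, and only the panel's content is re-rendered.

```csharp
using System;
using System.Web.UI;
using System.Web.UI.WebControls;

public partial class ClockPage : Page
{
    protected void Page_Init(object sender, EventArgs e)
    {
        var clock = new Label { ID = "ClockLabel", Text = DateTime.Now.ToLongTimeString() };
        var refresh = new Button { ID = "RefreshButton", Text = "Refresh" };
        refresh.Click += (s, args) => clock.Text = DateTime.Now.ToLongTimeString();

        var panel = new UpdatePanel { ID = "ClockPanel" };
        panel.ContentTemplateContainer.Controls.Add(clock);
        panel.ContentTemplateContainer.Controls.Add(refresh);

        // form1 is the page's <form runat="server">; the ScriptManager must come first.
        form1.Controls.Add(new ScriptManager());
        form1.Controls.Add(panel);
    }
}
```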

While working on SharePoint-related tasks, I also worked in parallel on other applications written on WebForms, and I was surprised to find that project deployment was a problem only on SP. In the other cases, applications came up immediately: the page loaded and worked (magic!). Deployment on SP took two to three minutes, and you lived in a constant cycle: deploy, take a break, see a bug, fix it, deploy, take a break… Over and over.

Everybody understood that it was a long process of deployments and mini-breaks. But I'm grateful for this pain: thanks to it, I learned how to produce more code with fewer mistakes in a single development iteration.

Pain #5: Windows, and nothing but Windows

At that moment, .NET was still positioning itself as a platform for Windows development. There was the Mono project (.NET for Linux), which was essentially an alternative implementation of the main Framework. Even now you can see the list of features that were never ported, broken down by Framework version, on the project's page (www.mono-project.com/docs/about-mono/compatibility). Developing for Linux was far from pleasant, because Mono didn't have the same level of support and community, and if your code touched something unimplemented, it could simply break. In other words, unless you started developing on Mono from the beginning, there was no chance of writing platform-independent code. I'm not saying the project isn't valuable for .NET development as a whole (there would be no Core without it), but I personally didn't manage to get useful experience from working with it.

The Pluses (or Painkillers) Era

Using pure .NET of the latest version in our projects has solved almost all of these problems. The classic Framework has gained many pluses too, but here let's talk about the pluses of Core, since that's what I worked with.

Plus #1: Speed

When .NET Core appeared, we got an opportunity to perform everyday operations faster and more elegantly. According to some measurements, final applications run up to 5,000 times faster than their .NET Framework counterparts on specific operations. Compilation and launch, however, still take longer.

Plus #2: Cross-platform

The main advantage of .NET Core is that the same code runs on Windows, Linux, and Mac. Moreover, you can build an application on a microservice architecture, such as an asynchronous logging service driven by a message queue. I remember the time when I wrote mostly for Windows; then I wrote daemons (services) for Linux, and they ran stably and fast on the first try, and the whole system worked smoothly: the application, the API service, and the message queue itself. It's just amazing to write code in a language familiar to you on an OS the platform was never meant to run on!
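
A minimal sketch of such a Linux daemon on .NET Core, built as a Worker Service on the Generic Host (Microsoft.Extensions.Hosting); the queue-reading logic is only hinted at, and all names are illustrative:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

public class LogWorker : BackgroundService
{
    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            // Here you would read a message from the queue and write the log entry.
            await Task.Delay(TimeSpan.FromSeconds(1), stoppingToken);
        }
    }
}

public class Program
{
    public static void Main(string[] args) =>
        Host.CreateDefaultBuilder(args)
            .ConfigureServices(services => services.AddHostedService<LogWorker>())
            .Build()
            .Run();
}
```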

Plus #3: Asynchrony of everything

Now there is an opportunity to write a backend not merely in parallel but fully asynchronously, which lets you move some tasks off the main thread into dedicated asynchronous methods or code blocks. This, in turn, allows writing beautiful, clean code free of bulky constructions: it's easy to understand, and asynchronous methods are written like synchronous ones and just work.
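
A small sketch of what that looks like with async/await (the URL is a placeholder): the method reads like sequential code, but the thread is released while the request is in flight.

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

class Program
{
    static async Task Main()
    {
        using var client = new HttpClient();

        // Looks like a plain sequential call, yet it does not block the thread:
        // execution resumes here once the response arrives.
        string body = await client.GetStringAsync("https://example.com/api/status");

        Console.WriteLine("Received " + body.Length + " characters");
    }
}
```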

Plus #4: Loading and unloading libraries, and less memory devouring

If we look at the current C# 8, the changes are fascinating. First of all, we previously had no way to unload a DLL once it was loaded: we could load libraries into the project dynamically, but they stayed in memory for good. With the release of .NET Core 3, it became possible to dynamically load and unload libraries as the objectives require. For instance, say we build an app that searches through files: if the user chooses the XML extension, we dynamically load an XML parser and search through the document tree; if the search is over JSON, we load a JSON library and search through its body. These libraries depend on particular conditions, so there is no need to keep them in memory. And yes, the app stopped devouring memory: when we unload the assembly, we release all the resources that were attached to it.
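
That capability lives in System.Runtime.Loader as collectible load contexts; here is a minimal sketch (the parser path and the reflection step are placeholders). Note that the memory is actually reclaimed by the GC once nothing references the context anymore.

```csharp
using System.Reflection;
using System.Runtime.Loader;

class ParserHost
{
    public static void SearchWithParser(string parserAssemblyPath)
    {
        // A collectible context can be unloaded later (new in .NET Core 3.0).
        var context = new AssemblyLoadContext("parser", isCollectible: true);
        try
        {
            Assembly parser = context.LoadFromAssemblyPath(parserAssemblyPath);
            // ... instantiate the parser type via reflection and run the search ...
        }
        finally
        {
            // Marks the assembly and its resources as collectible; the GC
            // frees them once no references to the context remain.
            context.Unload();
        }
    }
}
```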

Plus #5: Tuples

The language is still fresh and growing, and recent versions added many cool things: tuples, for example. Tuples existed before, but only as a separate type composed of several elements. Now there is real language support: you can write a method that returns not one object but several. Previously, to return more than one value, you had to declare an out/ref parameter or define a whole new type and carry it around; now you can simply return a tuple.
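
A small illustrative example (invented names): one method returns two named values, with no out parameters and no extra type.

```csharp
using System;
using System.Linq;

class Program
{
    // Before: int GetStats(int[] xs, out double average), or a dedicated Stats class.
    static (int Count, double Average) GetStats(int[] xs) =>
        (xs.Length, xs.Average());

    static void Main()
    {
        // The result deconstructs straight into two local variables.
        var (count, average) = GetStats(new[] { 1, 2, 3, 4 });
        Console.WriteLine(count + " items, average " + average);
    }
}
```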

Summing it up

Many developers hold the opinion that until things get better, nobody realizes how bad they were. .NET Core is open source, so everybody can add their own features and write about their own pains. There are many controversial points: some people are waiting for updates that seem absolutely inconvenient to others. For example, nullable reference types appeared in version 8 of the language, and the question of their convenience is still open: the feature had been promised in two previous versions but only landed with the final Core 3.0. It is not activated by default, because activating it may break a large existing project. But if you're writing an application from scratch, it is extremely useful and allows creating a more transparent app.
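
A small sketch of the feature in action (class and members invented): with the feature switched on, the compiler tracks which references may be null and warns about unchecked dereferences.

```csharp
// Enabled here per file; a whole project opts in with
// <Nullable>enable</Nullable> in the .csproj.
#nullable enable

public class Greeter
{
    public string Name { get; set; } = "world"; // must never be null
    public string? Nickname { get; set; }       // explicitly allowed to be null

    public string Greet()
    {
        // Dereferencing Nickname without a check would raise warning CS8602:
        // return "Hi, " + Nickname.ToUpper();
        return "Hi, " + (Nickname ?? Name);
    }
}
```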

In my opinion, today's platform is already a strong player on the development market with a relatively low entry threshold (there are even lower ones, but they are harder to work on). Of course, the choice of platform depends on a whole range of factors and goals. If it's a complex application that processes terabytes of data and has to be tuned down to the last byte, then it's complex programming in C++. Still, you should understand that it will take half a year to develop and two years to modify, so by launch it will already be outdated; moreover, there aren't many people who can code in C++. And if we're talking about Enterprise development, where you have two weeks before release, then it's reasonable to choose another technology that helps get things done faster (Java, .NET, PHP).