A common issue that you encounter when you need to consume an external API is a rate limit for users of the service. The API will only allow x requests in a given time period, otherwise it returns an HTTP 429 Too Many Requests error.
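As a rough illustration (the helper below is my own sketch, not from the post), a client can honour the Retry-After header when a 429 comes back:

```csharp
using System;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;

public static class RateLimitedClient
{
    public static async Task<HttpResponseMessage> GetWithRetryAsync(
        HttpClient client, string url, int maxAttempts = 3)
    {
        for (var attempt = 1; ; attempt++)
        {
            var response = await client.GetAsync(url);
            if (response.StatusCode != (HttpStatusCode)429 || attempt == maxAttempts)
                return response;

            // Fall back to a simple exponential backoff if no Retry-After was sent.
            var delay = response.Headers.RetryAfter?.Delta
                        ?? TimeSpan.FromSeconds(Math.Pow(2, attempt));
            await Task.Delay(delay);
        }
    }
}
```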
A friend of mine has started to dabble in C# and he was surprised that there doesn't seem to be an equivalent to the Python combinations library. To be honest it's something that I didn't really know I was missing, and my first thought was that it's a simple enough problem to solve. But when I started trying to solve it, it turned out that I was very wrong.
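For a flavour of the problem, here's a minimal sketch of one recursive approach (my own, and far from optimal):

```csharp
using System.Collections.Generic;
using System.Linq;

public static class Combinations
{
    // Yields all k-element combinations, similar in spirit to Python's
    // itertools.combinations (the method name is my own).
    public static IEnumerable<T[]> Of<T>(IReadOnlyList<T> items, int k)
    {
        if (k == 0) { yield return new T[0]; yield break; }
        for (var i = 0; i <= items.Count - k; i++)
            foreach (var tail in Of(items.Skip(i + 1).ToList(), k - 1))
                yield return new[] { items[i] }.Concat(tail).ToArray();
    }
}

// Combinations.Of(new[] { 1, 2, 3 }, 2) yields {1,2}, {1,3}, {2,3}.
```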
No, most probably not, but with the right setup it can make it a quicker process. People often see code reviews only as a way to enforce coding standards in an application. While this is important, for me it's more about communicating changes to each other so that information isn't siloed. That is why it's perfectly reasonable for a junior developer to code review a senior's changes.
With the recent announcement of .NET 5.0, I'd like to spend some time going over some of the nice features you get out of the box if you upgrade.
I recently started using Refit in a project and it's a really nice way to consume REST endpoints. Once you set it up, all you have to worry about is making a call against an interface and Refit will do all the work for you.
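A minimal sketch of what that looks like (the interface and types here are hypothetical):

```csharp
using System.Threading.Tasks;
using Refit;

public class User
{
    public string Login { get; set; }
}

public interface IGitHubApi
{
    // Refit builds the HTTP call from the attribute and method signature.
    [Get("/users/{username}")]
    Task<User> GetUserAsync(string username);
}

// Refit generates the implementation at runtime:
// var api = RestService.For<IGitHubApi>("https://api.github.com");
// var user = await api.GetUserAsync("octocat");
```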
It can be fun to challenge yourself with algorithm-type problems and to try to find an optimal solution. Unfortunately the only way to get better at them is to practice. After enough practice you might start to spot common patterns to solve a new problem just based on experience.
We recently started using MediatR in a project and it's a great way to keep your services simple as requirements grow. I want to show a simple example to give an idea of why you would use it.
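Roughly, the shape is this (the request and handler below are my own sketch):

```csharp
using System.Threading;
using System.Threading.Tasks;
using MediatR;

public class GetGreeting : IRequest<string>
{
    public string Name { get; set; }
}

public class GetGreetingHandler : IRequestHandler<GetGreeting, string>
{
    public Task<string> Handle(GetGreeting request, CancellationToken ct)
        => Task.FromResult($"Hello {request.Name}");
}

// The consuming code only depends on IMediator:
// var greeting = await mediator.Send(new GetGreeting { Name = "Vaughan" });
```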
An interesting class that is useful to know about, but that you most probably won't use directly, is AsyncLocal. It stores data local to the async flow. This means that if a method further down the chain changes the underlying value, the change will only apply to that flow and its children. A potential use case might be if you want access to a trace id on an internal request without having to manage some sort of cache.
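A small sketch of that behaviour:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

public static class AsyncLocalDemo
{
    private static readonly AsyncLocal<string> TraceId = new AsyncLocal<string>();

    public static async Task Run()
    {
        TraceId.Value = "parent";
        await ChildAsync();
        Console.WriteLine(TraceId.Value); // still "parent": the child's change didn't flow back
    }

    private static async Task ChildAsync()
    {
        TraceId.Value = "child"; // only visible to this flow and its children
        await Task.Yield();
        Console.WriteLine(TraceId.Value); // "child"
    }
}
```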
I want to convince you to try property-based testing in your projects. It's a really powerful approach when combined with classic unit testing.
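As a hedged taste of the idea, here's what a property looks like with FsCheck, one popular .NET property-testing library: instead of fixed cases, you assert something that should hold for all inputs, and the library generates and shrinks counter-examples.

```csharp
using System.Linq;
using FsCheck;

public class ReverseProperties
{
    public static void ReverseTwiceIsIdentity()
    {
        // Reversing a list twice should always give back the original list.
        Prop.ForAll<int[]>(xs => xs.Reverse().Reverse().SequenceEqual(xs))
            .QuickCheckThrowOnFailure();
    }
}
```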
Following from my previous post on getting paged data from twitter, I thought it would be interesting to use ML.NET to do a little analysis. A common problem solved with machine learning is doing sentiment analysis on text to evaluate how positive or negative something is. To solve this problem with ML.NET really isn't that much code, and so it's a nice entry point into their machine learning offering.
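For a rough idea of the shape (the classes and column names here are my own, not the post's code):

```csharp
using Microsoft.ML;
using Microsoft.ML.Data;

public class SentimentData
{
    public string Text { get; set; }
    public bool Label { get; set; }
}

public class SentimentPrediction
{
    [ColumnName("PredictedLabel")]
    public bool Prediction { get; set; }
    public float Probability { get; set; }
}

public static class SentimentModel
{
    public static void TrainAndPredict(SentimentData[] trainingData)
    {
        var mlContext = new MLContext();
        var data = mlContext.Data.LoadFromEnumerable(trainingData);

        // Featurize the raw text, then train a simple binary classifier.
        var pipeline = mlContext.Transforms.Text
            .FeaturizeText("Features", nameof(SentimentData.Text))
            .Append(mlContext.BinaryClassification.Trainers.SdcaLogisticRegression());

        var model = pipeline.Fit(data);
        var engine = mlContext.Model
            .CreatePredictionEngine<SentimentData, SentimentPrediction>(model);
        var result = engine.Predict(new SentimentData { Text = "I love this" });
    }
}
```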
A really nice feature in C# 8 is async enumerables. A popular use case is if you want to consume paged data and potentially aggregate each page of data without having to get everything first. Previously the consuming code would have to handle the paging, but now it can just enumerate over the IAsyncEnumerable and process the data after each call. It's actually simpler than I'm making it sound! I created a small console app that will hopefully show you what I mean. If you want to run it locally you can get the source here.
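A sketch of the pattern (the endpoint and paging scheme are hypothetical):

```csharp
using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;

public class PagedClient
{
    private readonly HttpClient _client = new HttpClient();

    public async IAsyncEnumerable<string> GetAllItemsAsync()
    {
        var page = 1;
        while (true)
        {
            var body = await _client.GetStringAsync($"https://example.com/items?page={page++}");
            if (string.IsNullOrEmpty(body)) yield break; // no more pages
            yield return body; // a real client would deserialize and yield each item
        }
    }
}

// Consuming code processes each page as it arrives:
// await foreach (var item in client.GetAllItemsAsync()) { Process(item); }
```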
With more and more companies moving to the cloud, the idea of serverless execution of your code has more and more appeal. AWS has Lambda and Azure has Functions. The idea is the same: You give them a method that you want to execute and the provider works out the scaling. As you get more requests, it will create more and more instances. The obvious benefit is that you don't need to worry about where or how your application is running. It's a cheap and easy way to execute code in the cloud.
We all agree that we want to have tests that cover as much of our application as possible. Right? The general approach with ASP.NET applications has been to extract meaningful business logic out of your controllers. Pretty much everything that needs to be tested would be injected with an interface so that it's easy to mock and test your business logic.
When looking at the performance of an application, there is a trade-off between when you should use value types and when you should use reference types. People often talk about minimizing allocations to the heap for reference types, but allocations themselves are actually very quick. The reason they're an issue is that those allocations eventually need to be garbage collected. This is time that the application spends cleaning up memory instead of running your code. When looking at allocations, value types are great because they generally live on the stack: once the method is finished they are gone. Of course there are trade-offs with everything. The stack is more limited than the heap, so you have size constraints, and the whole object needs to be copied when passing it around. (Side note: value types aren't always on the stack. They can be on the heap in some circumstances, e.g. when boxed, as fields of a class, or as elements of an array.)
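A tiny illustration of that side note:

```csharp
public static class BoxingDemo
{
    public static void Run()
    {
        int number = 42;          // a local value type: lives on the stack
        object boxed = number;    // boxed: a heap allocation the GC must collect later

        int[] numbers = { 1, 2, 3 }; // value-type elements, but stored on the heap inside the array
    }
}
```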
A new feature in C# 8.0 and .NET Core 3.0 is default interface members. The motivation is that authors of a public interface may want to add functionality to an existing interface without forcing all the consuming code to implement it. You can add a default implementation to your new method so that consuming code isn't forced to.
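A minimal sketch (the interface here is my own example):

```csharp
public interface IMessageLogger
{
    void Log(string message);

    // New member with a default body: existing implementers keep compiling.
    void LogError(string message) => Log($"ERROR: {message}");
}

public class ConsoleLogger : IMessageLogger
{
    public void Log(string message) => System.Console.WriteLine(message);
    // No LogError override needed; the interface's default implementation is used.
}
```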
A great C# feature that was recently released is the new Index and Range operators. Two new System types were created to make accessing elements, or a range of elements, in arrays shorter and more convenient.
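For example:

```csharp
public static class IndexRangeDemo
{
    public static void Run()
    {
        var numbers = new[] { 0, 1, 2, 3, 4 };

        var last = numbers[^1];      // 4: Index counting from the end
        var middle = numbers[1..4];  // { 1, 2, 3 }: Range, end index exclusive
        var fromTwo = numbers[2..];  // { 2, 3, 4 }
    }
}
```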
Over the years developers using more functional languages like F# have boasted about how much easier their code is to read and how imperative C# can be. NO LONGER! Well maybe not much longer.
When writing async code, your method is allowed to return void, Task, Task<TResult>, or [ValueTask](https://www.vaughanreid.com/2020/06/when-to-consider-using-valuetask-over-task-with-async-code). Since early on with async code, the advice has been not to use void-returning methods. But in real life there are times when you don't really have a choice. Since we can't always get away from using them, it's good to understand how they differ from async code returning a Task and what we can potentially do to make them less *bad*.
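A small sketch of the difference:

```csharp
using System;
using System.Threading.Tasks;

public class AsyncVoidDemo
{
    public async Task ThrowsTaskAsync() { await Task.Yield(); throw new Exception(); }
    public async void ThrowsVoid()      { await Task.Yield(); throw new Exception(); }

    public async Task Caller()
    {
        try { await ThrowsTaskAsync(); }   // exception observed here on await
        catch (Exception) { /* handled */ }

        try { ThrowsVoid(); }              // exception escapes to the SynchronizationContext
        catch (Exception) { /* never hit for the part after the await */ }
    }
}
```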
If you need to run your long-running .NET Core worker application on a Windows server, you can easily install it as a Windows Service with minimal changes.
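The core of it is roughly this, assuming the Microsoft.Extensions.Hosting.WindowsServices package and the worker template's Worker class:

```csharp
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

public class Program
{
    public static void Main(string[] args) =>
        Host.CreateDefaultBuilder(args)
            .UseWindowsService() // behaves as a no-op when run as a console app
            .ConfigureServices(services => services.AddHostedService<Worker>())
            .Build()
            .Run();
}
```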
As of .NET Core 2.0, you can use ValueTask and ValueTask<TResult> as return types for your async methods. It's important to understand when and if you should use these instead of a Task and what the trade-offs are if you decide to use them.
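A sketch of the classic use case, a method that often completes synchronously:

```csharp
using System.Collections.Concurrent;
using System.Net.Http;
using System.Threading.Tasks;

public class CachingClient
{
    private readonly ConcurrentDictionary<string, string> _cache = new ConcurrentDictionary<string, string>();
    private readonly HttpClient _client = new HttpClient();

    public ValueTask<string> GetAsync(string url)
    {
        if (_cache.TryGetValue(url, out var cached))
            return new ValueTask<string>(cached); // cache hit: no Task allocation

        return new ValueTask<string>(FetchAsync(url)); // cache miss: wraps a real Task
    }

    private async Task<string> FetchAsync(string url)
    {
        var body = await _client.GetStringAsync(url);
        _cache[url] = body;
        return body;
    }
}
```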
When trying to improve the performance of your code, the first thing you need to do is measure. If you don't measure then you won't know if your changes are actually improving performance. In most cases, there is a trade-off between readability and performance. Often better performing code is more complex and harder to maintain. It's for this reason that you have to be very careful when making changes without actually understanding what the real impact will be.
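A minimal BenchmarkDotNet sketch, one common way to measure in .NET (the benchmarks themselves are my own toy example):

```csharp
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

[MemoryDiagnoser] // also report allocations, not just timings
public class StringBenchmarks
{
    [Benchmark]
    public string Concat() => string.Concat("a", "b", "c");

    [Benchmark]
    public string Interpolate() { var b = "b"; return $"a{b}c"; }
}

public class Program
{
    public static void Main() => BenchmarkRunner.Run<StringBenchmarks>();
}
```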
There are times when you want to add a new action with the same route but you don't want to break backwards compatibility. One option that could help with this is a custom action constraint.
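A hedged sketch of the idea, using a hypothetical api-version header to select between actions:

```csharp
using System;
using Microsoft.AspNetCore.Mvc.ActionConstraints;

public class ApiVersionConstraintAttribute : Attribute, IActionConstraint
{
    private readonly string _version;
    public ApiVersionConstraintAttribute(string version) => _version = version;

    public int Order => 0;

    // The action is only selected when the header matches, so the old
    // action keeps handling requests without the header.
    public bool Accept(ActionConstraintContext context) =>
        context.RouteContext.HttpContext.Request.Headers["api-version"] == _version;
}

// [HttpGet("people")]
// [ApiVersionConstraint("2")]
// public IActionResult GetPeopleV2() => ...
```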
Two terms that are often used interchangeably are concurrency and parallelism. It doesn't help that in English, doing something concurrently means that you are doing more than one thing at a time. In software it's a little more complicated. A good explanation of the difference that I have read is as follows:
A small but useful feature available in ASP.NET Core is defining route constraints to disambiguate similar routes. **It's important to note that the documentation makes it clear that you shouldn't be using this for input validation.**
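For example:

```csharp
using Microsoft.AspNetCore.Mvc;

[ApiController]
public class PeopleController : ControllerBase
{
    // /people/5 matches here because of the :int constraint...
    [HttpGet("people/{id:int}")]
    public IActionResult GetById(int id) => Ok(id);

    // ...while /people/smith falls through to the :alpha route.
    [HttpGet("people/{name:alpha}")]
    public IActionResult GetByName(string name) => Ok(name);
}
```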
A major positive of the whole world working from home is that events that you would normally have to travel to are now online for everyone. That's a huge saving in time and money for someone in South Africa. This year I was lucky to be able to attend the Microsoft Build event, normally hosted in Seattle. Because of time zone issues I only attended the event on the 20th of May, but there was still quite a lot of content available in my time zone.
This is the fifth in a series of posts covering measuring a .NET Core application. If you want to try to follow the code then you can have a look at this repo: Blog-Diagnostics.
This is the fourth in a series of posts covering measuring a .NET Core application. If you want to try to follow the code then you can have a look at this repo: Blog-Diagnostics.
Yesterday I was trying to show how to diagnose a deadlock when using blocking code over async. This was a pretty common mistake in ASP.NET and a nice scenario to add to my measurement series. I created a controller like this in my ASP.NET Core sample application and, strangely, it worked without error.
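The controller was shaped roughly like this (a sketch, not the post's exact code):

```csharp
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

[ApiController]
public class DeadlockController : ControllerBase
{
    // In classic ASP.NET, .Result here can deadlock: the continuation needs the
    // request's SynchronizationContext, which the blocked thread is holding.
    // ASP.NET Core has no SynchronizationContext, which is why this "worked".
    [HttpGet("deadlock")]
    public string Get() => GetMessageAsync().Result; // blocking over async

    private async Task<string> GetMessageAsync()
    {
        await Task.Delay(100);
        return "hello";
    }
}
```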
This is the third in a series of posts covering measuring a .NET Core application. If you want to try to follow the code then you can have a look at this repo: Blog-Diagnostics.
This is the second in a series of posts covering measuring a .NET Core application. If you want to try to follow the code then you can have a look at this repo: Blog-Diagnostics.
This is the first in a series of posts covering measuring a .NET Core application. If you want to try to follow the code then you can have a look at this repo: Blog-Diagnostics.
In a previous post I showed how you can use custom middleware to disable endpoints based on configuration and attributes. It is a good way to show how middleware works but there is an easier way. You can achieve the same result by creating a custom authorization filter.
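A minimal sketch of the filter approach (the attribute name is mine, and the configuration check is left out):

```csharp
using System;
using Microsoft.AspNetCore.Mvc;
using Microsoft.AspNetCore.Mvc.Filters;

public class DisabledEndpointAttribute : Attribute, IAuthorizationFilter
{
    public void OnAuthorization(AuthorizationFilterContext context)
    {
        // A real implementation would consult configuration here.
        context.Result = new NotFoundResult(); // short-circuit with a 404
    }
}

// [DisabledEndpoint]
// [HttpGet("debug-info")]
// public IActionResult DebugInfo() => ...
```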
I previously showed how you can create your own middleware and use the app.UseMiddleware method to insert it into the pipeline.
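As a reminder, the shape is roughly this (the middleware here is my own example):

```csharp
using System;
using System.Diagnostics;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;

public class RequestTimingMiddleware
{
    private readonly RequestDelegate _next;
    public RequestTimingMiddleware(RequestDelegate next) => _next = next;

    public async Task InvokeAsync(HttpContext context)
    {
        var watch = Stopwatch.StartNew();
        await _next(context); // run the rest of the pipeline
        Console.WriteLine($"{context.Request.Path} took {watch.ElapsedMilliseconds}ms");
    }
}

// In Startup.Configure:
// app.UseMiddleware<RequestTimingMiddleware>();
```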
Authenticating to an IdentityServer4 service with an ASP.NET Core application is actually quite easy once you see it. It does require a basic understanding of the OpenID framework though. I’m first going to explain some basics and then I’ll show the code at the end.
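To give a rough idea of where this ends up, here is a sketch based on the usual IdentityServer4 quickstart registration (the authority, client id and secret are all placeholders):

```csharp
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddAuthentication(options =>
        {
            options.DefaultScheme = "Cookies";
            options.DefaultChallengeScheme = "oidc";
        })
        .AddCookie("Cookies")
        .AddOpenIdConnect("oidc", options =>
        {
            options.Authority = "https://localhost:5001"; // the IdentityServer4 host
            options.ClientId = "mvc";
            options.ClientSecret = "secret";
            options.ResponseType = "code";
            options.SaveTokens = true;
        });
    }
}
```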
ASP.NET Core has quite a large selection of model binders built in that are able to convert http requests to simple and complex models in your actions. If there isn't one that matches your needs, you can always create a custom one. To show how, I'm going to create a custom model binder that converts csv formatted text to an IEnumerable of models using the great CsvHelper library.
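A simplified sketch of the binder (see the post for the real version):

```csharp
using System.Collections.Generic;
using System.Globalization;
using System.IO;
using System.Linq;
using System.Threading.Tasks;
using CsvHelper;
using Microsoft.AspNetCore.Mvc.ModelBinding;

public class CsvModelBinder<T> : IModelBinder
{
    public async Task BindModelAsync(ModelBindingContext bindingContext)
    {
        // Read the raw CSV body and let CsvHelper map it to the model type.
        using var reader = new StreamReader(bindingContext.HttpContext.Request.Body);
        var text = await reader.ReadToEndAsync();

        using var csv = new CsvReader(new StringReader(text), CultureInfo.InvariantCulture);
        IEnumerable<T> records = csv.GetRecords<T>().ToList();

        bindingContext.Result = ModelBindingResult.Success(records);
    }
}
```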
Not everyone knows that Swagger gives you more than just a nice UI to expose your APIs. It actually exposes a JSON endpoint that defines your API. Tools like Swagger Codegen can use that endpoint to create a complete consuming project in an array of programming languages.
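For context, a typical Swashbuckle setup exposes that JSON document (by default at /swagger/v1/swagger.json); a rough sketch:

```csharp
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.OpenApi.Models;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddControllers();
        services.AddSwaggerGen(c =>
            c.SwaggerDoc("v1", new OpenApiInfo { Title = "My API", Version = "v1" }));
    }

    public void Configure(IApplicationBuilder app)
    {
        app.UseSwagger(); // the JSON definition endpoint codegen tools consume
        app.UseSwaggerUI(c => c.SwaggerEndpoint("/swagger/v1/swagger.json", "My API v1"));
    }
}
```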
There are times when you have controllers or actions that are very useful for debugging issues but you don't really want to expose them in a production environment. One way to achieve this is to take advantage of the separation of the Endpoint Routing Middleware and the Endpoint Middleware components. Another way would be to use a custom authorization filter, but in this case I'll use middleware.
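A hedged sketch of the middleware approach (the attribute is hypothetical):

```csharp
// In Startup.Configure: UseRouting() runs first, so the selected endpoint and
// its metadata are already available to middleware placed after it.
app.UseRouting();

app.Use(async (context, next) =>
{
    var endpoint = context.GetEndpoint();
    if (endpoint?.Metadata.GetMetadata<DisabledEndpointAttribute>() != null)
    {
        context.Response.StatusCode = 404; // hide the endpoint in production
        return;
    }
    await next();
});

app.UseEndpoints(endpoints => endpoints.MapControllers());
```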
One surprising issue when using HttpClient is port exhaustion. This often happens when an application creates a lot of HttpClient instances for short requests. Under sufficient load the machine may run out of TCP ports, and then it won't be able to make another request until a port opens up. The reason is that disposing an HttpClient will still keep a TCP connection open in the TIME_WAIT state. This then relies on the system-configured timeout, which is often a couple of minutes. It's the worst kind of bug because it often happens in production when you are under heavy load.
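The usual fix is to reuse the underlying handlers, for example via IHttpClientFactory; a rough sketch:

```csharp
using System.Net.Http;
using System.Threading.Tasks;

public class CatalogueClient
{
    private readonly IHttpClientFactory _factory;
    public CatalogueClient(IHttpClientFactory factory) => _factory = factory;

    public Task<string> GetAsync(string url)
    {
        // Cheap to create per call: the factory pools the underlying handlers,
        // so connections are reused instead of piling up in TIME_WAIT.
        var client = _factory.CreateClient();
        return client.GetStringAsync(url);
    }
}

// Registered in ConfigureServices with: services.AddHttpClient();
```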
To be honest I have mixed feelings about using something like AutoMapper by default, but there are definitely cases where it heavily reduces the amount of code you write and therefore the number of bugs.
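A minimal sketch of the kind of code it replaces (the types are my own):

```csharp
using AutoMapper;

public class Person { public string FirstName { get; set; } public string LastName { get; set; } }
public class PersonDto { public string FirstName { get; set; } public string LastName { get; set; } }

public static class MappingExample
{
    public static PersonDto Map(Person person)
    {
        // One CreateMap call replaces property-by-property copying code.
        var config = new MapperConfiguration(cfg => cfg.CreateMap<Person, PersonDto>());
        var mapper = config.CreateMapper();
        return mapper.Map<PersonDto>(person);
    }
}
```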
One feature in ASP.NET Core that I was surprised by is the ability to inject a service into a specific action of a controller. Normally you inject into a constructor of a controller and keep the dependencies in private fields. So imagine a controller like this:
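A sketch of what the feature looks like (the service is hypothetical):

```csharp
using Microsoft.AspNetCore.Mvc;

public interface IReportService { object GetAll(); }

[ApiController]
public class ReportsController : ControllerBase
{
    // [FromServices] resolves IReportService for this action only,
    // instead of via the controller's constructor.
    [HttpGet("reports")]
    public IActionResult Get([FromServices] IReportService reports) => Ok(reports.GetAll());
}
```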
If you need to validate the inputs to your API actions, I really do recommend trying the FluentValidation.AspNetCore package. It's very easy to integrate and you can do quite powerful validations. As an example I created a Person controller.
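A hedged sketch of what a validator might look like (the Person properties are my guesses, not the post's actual model):

```csharp
using FluentValidation;

public class Person
{
    public string Name { get; set; }
    public int Age { get; set; }
}

public class PersonValidator : AbstractValidator<Person>
{
    public PersonValidator()
    {
        RuleFor(p => p.Name).NotEmpty();
        RuleFor(p => p.Age).InclusiveBetween(0, 120);
    }
}

// Registered with: services.AddControllers().AddFluentValidation(
//     fv => fv.RegisterValidatorsFromAssemblyContaining<PersonValidator>());
```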
One subtle adjustment that some of the older .NET developers most probably need to make is the idea of semantic logging in ASP.NET Core. This is subtle because a lot of us started using string interpolation a few years ago, and the examples in the documentation look very similar to how my string interpolation strings look. See if you can spot the issue below:
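The post's snippet isn't reproduced here, but the contrast is along these lines (my own illustration, with the comments giving the game away):

```csharp
using Microsoft.Extensions.Logging;

public class OrderProcessor
{
    private readonly ILogger<OrderProcessor> _logger;
    public OrderProcessor(ILogger<OrderProcessor> logger) => _logger = logger;

    public void Process(int id)
    {
        // String interpolation: flattened to "Processing order 42" before the
        // provider sees it, so the structured value is lost.
        _logger.LogInformation($"Processing order {id}");

        // Message template: the provider keeps OrderId = 42 as a named property
        // that structured sinks can index and query.
        _logger.LogInformation("Processing order {OrderId}", id);
    }
}
```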
I've spoken previously about IOptions in ASP.NET Core. On their own, they are a very useful way to inject configuration into your services. What I didn't mention is that they allow you to build in validation of your configuration. One reason this might be needed is that the IOptions services are normally very forgiving about missing configuration. If you don't have a value in your appsettings.json then it will just populate with a default value. This could cause serious hidden errors at runtime for your application.
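A sketch of the validation approach (the settings class is my own example):

```csharp
using System.ComponentModel.DataAnnotations;

public class ApiSettings
{
    [Required] // a missing BaseUrl now fails validation instead of defaulting to null
    public string BaseUrl { get; set; }
}

// In Startup.ConfigureServices:
// services.AddOptions<ApiSettings>()
//     .Bind(Configuration.GetSection("ApiSettings"))
//     .ValidateDataAnnotations();
```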
A really interesting trick you can do with the ASP.NET Core dependency injection is that you can register an open generic type. The prime example is the built-in ILogger<T> interface. Each class can inject an ILogger<T> of its own type without having to register it specifically. It can be quite a powerful technique in the right circumstance.
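A minimal sketch with a hypothetical repository abstraction:

```csharp
public interface IRepository<T> { }
public class Repository<T> : IRepository<T> { }

// In ConfigureServices, one open generic registration serves every closed type:
// services.AddSingleton(typeof(IRepository<>), typeof(Repository<>));
// Now any class can inject IRepository<Customer>, IRepository<Order>, etc.
```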
In the containerized application and Kubernetes world, quite often you need to create services that aren't traditional REST endpoints. Your application could, for example, listen to a message queue to process requests. It would be nice to still have all the nice infrastructure of .NET Core applications, like health checks, but have it work closer to a Windows service.
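The usual shape is a BackgroundService hosted by the generic host; a rough sketch:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;

public class QueueListener : BackgroundService
{
    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            // A real service would dequeue and process a message here,
            // while the host still provides DI, logging and health checks.
            await Task.Delay(TimeSpan.FromSeconds(1), stoppingToken);
        }
    }
}

// Registered with: services.AddHostedService<QueueListener>();
```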
In the old days of .NET applications we had a simple XML configuration file that would ship with each environment. This was tried and tested and well understood. Then came Docker. The promise was that I could build a container on my machine and then ship that exact container to production. This is great, except for my trusty configuration file. You don't really want to update the file system of the container for each environment it goes through. There must be a better way!
Sometimes when connecting to internal systems you need trusted certificates to be able to authenticate. Generally the base Docker image you use when running ASP.NET Core applications won't have your internal company certificates. You need a way to deploy these in your container so that your endpoints work.
One of my favorite improvements when moving to ASP.NET Core applications is that configuration feels like a first-class citizen in the infrastructure. Previously I often had to do some manual work in the DI registration to create configuration objects that could be injected into my classes. No longer!
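The pattern, roughly (the settings class is my own example):

```csharp
using Microsoft.Extensions.Options;

public class SmtpSettings
{
    public string Host { get; set; }
    public int Port { get; set; }
}

public class MailSender
{
    private readonly SmtpSettings _settings;

    // IOptions<SmtpSettings> is injected once the section is bound in
    // ConfigureServices: services.Configure<SmtpSettings>(Configuration.GetSection("Smtp"));
    public MailSender(IOptions<SmtpSettings> options) => _settings = options.Value;
}
```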
About a month ago I made my first commit to the grpc-dotnet project. It was a small change but it was more about the intent for me. Since then I’ve submitted a few other pull requests to that project and also the aspnetcore project. Some have been merged and others are waiting.
The nice thing about .NET being open source is that at any time you can look at the source of interesting classes that you rely on. In this post I look at the ConcurrentDictionary class to get a better idea of how Microsoft writes thread-safe classes. As a recap: generally a normal dictionary is implemented internally as an array of buckets. When you add an element, the hash of the key gets you to the right bucket, and then each item in the bucket is compared for equality to see if it exists. It is for this reason that you need to be careful with the hashing algorithm that you implement: for quicker lookups you ideally want more buckets with fewer items rather than a small number of buckets with many elements. A worst case would be a dictionary resolving to a single bucket for all its items, which would have an O(n) lookup instead of O(1). ConcurrentDictionary is quite a large and complex class, so I will try to cover the core parts of it. Central to how it works is this internal class within it.
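A small illustration of why the hash matters:

```csharp
public class BadKey
{
    public string Value { get; set; }

    // A constant hash sends every key to the same bucket, degrading
    // dictionary lookups from O(1) to O(n).
    public override int GetHashCode() => 1;

    public override bool Equals(object obj) =>
        obj is BadKey other && other.Value == Value; // every lookup now walks the bucket
}
```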
To try to show some of the financial concepts in practice, I've created a library on GitHub to track it - FinanceLibrary. I'm not going to include the tests below, but I added tests to each class as I created it. Have a look at the project for a more complete view. The first calculation I want to show is present value. To do this I need some initial setup code. I'm going to start by declaring that all instruments have cashflows.
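Roughly along these lines (a sketch; the real definitions live in the FinanceLibrary repo and will differ):

```csharp
using System;
using System.Collections.Generic;

public class Cashflow
{
    public DateTime Date { get; set; }
    public decimal Amount { get; set; }
}

public interface IInstrument
{
    // Every instrument can be reduced to the cashflows it produces.
    IEnumerable<Cashflow> GetCashflows();
}
```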
The BlockingCollection is a powerful thread-safe collection to use where appropriate. Generally you could consider it where you have a common instance of a class accessed from multiple threads. It gives the class control over how many items are added or taken in a thread-safe way. Calling Take when the collection is empty, or Add when the collection is full, blocks the thread until something is added or removed. As an example, imagine your class gets data from a WCF client. To improve performance you might want to have multiple shared client instances. A BlockingCollection could really help you there because you could bound it to, say, a maximum of 4 instances of the client contract. Each consuming thread will try to take an instance from the collection and add it back once done. If all 4 instances are checked out then the next call will block and wait until one is available. This is incredibly powerful and it keeps the code maintaining this quite small.
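A hedged sketch of that bounded-pool pattern:

```csharp
using System;
using System.Collections.Concurrent;

public class ClientPool<TClient>
{
    private readonly BlockingCollection<TClient> _clients =
        new BlockingCollection<TClient>(boundedCapacity: 4);

    public void Register(TClient client) => _clients.Add(client);

    public TResult Use<TResult>(Func<TClient, TResult> work)
    {
        var client = _clients.Take();      // blocks until an instance is free
        try { return work(client); }
        finally { _clients.Add(client); }  // return it to the pool
    }
}
```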
Present value is an important calculation across a bank as it's used to calculate a trader's total mark-to-market position, i.e. how much are all their trades worth?
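The standard discounting formula, as a quick sketch:

```csharp
using System;

public static class Discounting
{
    // PV = CF / (1 + r)^t, with r the discount rate and t the time in years.
    public static decimal PresentValue(decimal cashflow, double rate, double years) =>
        cashflow / (decimal)Math.Pow(1 + rate, years);
}

// e.g. 100 received in 2 years at 5%: 100 / 1.05^2 ≈ 90.70
```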
For years I heard about Rx in .NET but never really found a compelling reason to use it. About a year ago we started using it in my team, and let me just say that things have changed. Now I don't think there's a compelling reason NOT to use it as part of your standard toolset. It's a great tool with a really interesting way of looking at things, and I'd argue that you should strongly consider it whenever you are writing multi-threaded applications. The reason is that it has a really nice solution to one of the biggest issues with multi-threaded code: testability. Without a test proving a piece of work, can you really say it's working as expected? What happens if your elegant code has a bug? How sure are you that your fix to the bug isn't breaking something else? I'm not saying that Rx can or should be used for all multi-threaded code, but since it's built with testing in mind it's a very useful tool in the belt. Let me show you an example.
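A taste of the testability point (my own sketch using Rx's TestScheduler, not the post's example): the test controls virtual time, so timing logic is asserted without real delays or threads.

```csharp
using System;
using System.Reactive.Linq;
using Microsoft.Reactive.Testing;

public class TimerTest
{
    public static void TimerFiresInVirtualTime()
    {
        var scheduler = new TestScheduler();
        var observer = scheduler.CreateObserver<long>();

        Observable.Interval(TimeSpan.FromSeconds(1), scheduler).Subscribe(observer);

        scheduler.AdvanceBy(TimeSpan.FromSeconds(3).Ticks); // "3 seconds" pass instantly
        // observer.Messages now holds the values 0, 1 and 2.
    }
}
```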
Over the years I’ve been to a lot of technical interviews on both sides of the table and if there’s one thing I’ve learnt and thats the fact that its really hard to get right. How do you tell if someone will be good at writing good quality code in an hour or 2 of talking to them? Do you even know after a week of them working? Are the people asking about TDD(insert trendy buzz word at the time) because they actually do it or because they want to do it. Do they actually understand what it means?
As a heavy consumer of blogs I've eventually decided to start blogging myself. I've owned this domain since 2012, so it's been a plan for a while, but I could never get around to setting it all up. I think one of my main obstacles was the fact that I knew websites took work to develop and maintain. I've created a fair number of websites and even the simplest ones take time when you have to think about browser support, accessibility and extensibility. I knew most people used CMSs to create them, but I'm a developer. I know my way around CSS, JavaScript and HTML. Why would I waste my time building it on someone else's framework? So this is where I was stuck for 3 years.
So I’ve finally created my blog after owning a domain since 2012. Most of my posts will be on development problems that I’m interested in but I’d also like to write about financial concepts that I’ve learnt along the way.