Functional programming has been around since the late 1950s, based on lambda calculus concepts dating back nearly 100 years. It has been embraced by just about every modern language because of the guarantees it offers. Immutability is one property of functional programming: functions never change a variable, but instead return a new one. When a function […] Read more "Computer Architecture is Non-Functional"
My earlier Faster Sorting in C# blog described a Parallel Merge Sort algorithm, which scaled well from 4 cores to 26 cores, running 4X to 20X faster, respectively, than the standard C# Linq.AsParallel().OrderBy. In this blog, I’ll describe an even faster Parallel Merge Sort implementation – faster by another 2X. Performance of the New Approach C# […] Read more "Even Faster Sorting in C#"
What Is It? Computing constantly provides space-time trade-offs. To make an algorithm faster, more space can be used. Or, if space is at a premium, a slower but more space-efficient algorithm can be used. This kind of trade-off occurs in every aspect of computing, such as software development and chip design. In […] Read more "To In-Place or To Not-In-Place"
The number of cores in modern processors is growing. For quite a while we were stuck at 2 or 4 cores. But lately the number of cores has grown to 6 and 8 even for consumer processors, to 14 cores for Intel Xeon workstation and cloud CPUs, and to 32 cores for AMD desktop and […] Read more "Parallelism: Perform No Worse"
I wrote a blog, “Faster List.ToArray() and Copying in C#,” a while back, which showed several ways to copy from a List to an Array faster. One of those ways was implemented in the HPCsharp NuGet package and increased performance by 4X by using multiple processor cores. In this blog I’ll show how to do the same for […] Read more "Faster Array.ToArray() and Copying in C#"
Standard deviation is one of the basic tools in a statistician’s tool chest for measuring variability within a data set. The basic formula can be found on Wikipedia. Within the formula, one of the summations takes every data point, subtracts the average value, and then squares the result. This blog will explore how this squaring affects standard […] Read more "How Standard Deviation Measures Warped Data"
Standard deviation is a core statistical algorithm used to measure the variability of a data set. It is used extensively in data science to provide useful information about the data. The computation itself uses summation twice within the algorithm: once to compute the mean (average) of the data set, and again to sum the square of […] Read more "Parallel Standard Deviation"
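Because both steps are summations, each can be split across cores and the partial sums combined. A minimal sketch of parallelizing the second summation with two tasks (illustrative only; a real implementation would use one task per core):

```cpp
#include <cmath>
#include <future>
#include <numeric>
#include <vector>

// Sum of squared deviations, split across two async tasks.
static double par_sum_sq(const std::vector<double>& x, double mean) {
    auto half = x.begin() + x.size() / 2;
    auto work = [mean](auto lo, auto hi) {
        double s = 0;
        for (auto it = lo; it != hi; ++it) s += (*it - mean) * (*it - mean);
        return s;
    };
    auto f = std::async(std::launch::async, work, x.begin(), half);
    return work(half, x.end()) + f.get();  // combine partial sums
}

double std_dev_par(const std::vector<double>& x) {
    double mean = std::accumulate(x.begin(), x.end(), 0.0) / x.size();
    return std::sqrt(par_sum_sq(x, mean) / x.size());
}
```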
We think of computer memory as RAM – random access memory – meaning that memory can be accessed at any location with nearly the same latency. This is true of certain memory types, such as static RAM – SRAM. As we shall see in this blog, when system memory is made of […] Read more "Memory Access"
This blog is a slightly different kind of cheat sheet. It is based on common tasks that we perform using the git command line. Hopefully, this blog will reduce your memorization overload. To Clone a Repository: git clone https://repositoryName.git – grab the repositoryName.git part from the git web UI for cloning a repository. This command will […] Read more "Git by Task"
I’ve made several attempts at parallelizing the LSD Radix Sort algorithm. This is a conceptual description of the latest attempt, which hopefully will work out well. My latest implementation of a partially parallel version of LSD Radix Sort is performing very well, running at around 150 MegaInt32/sec when implemented in C++ and nearly the same speed in […] Read more "Parallel LSD Radix Sort"