I would like to share my thoughts #9853
-
Question 1 concerns protecting users from basic errors. I found that C# goes against a core principle of programming, which can be summed up as "an implicit conversion must not be performed if it can lose data." I understand that users can define their own implicit conversion operators and create chaos in their code, but they are constrained by the rule that when two or more implicit conversions are applicable at once, the language demands an explicit cast to resolve the ambiguity. Still, languages generally define the behavior of the built-in types strictly and prohibit silent data loss. Yet C# has its own strange logic and behavior around implicit conversions, and here is an example. To be clear, I can accept any rules as long as they apply consistently everywhere; they can be memorized and worked with, just as when we multiply an integer by a floating-point number we immediately understand that the result will be floating-point, with a possible loss of precision.

This ties into another problem I discovered. Unlike C++, C# allows integers to be converted implicitly to floating-point or decimal numbers, permitting data loss. (I'm not sure about decimal, since it occupies 16 bytes and I don't use it at all.) But that's barely half the problem. It turns out that C# applies some kind of internal optimization which, even in debug mode, deceives the user: it treats a long and a double as the same value and compares them as equal numbers. (I'm not sure about the entire range of values; perhaps there's a tolerance within which it considers the numbers equal. In any case, when converting long -> double -> long, an optimization occurs that makes C# behave as if no conversion took place.) I learned about this by accident, when I simply wanted to demonstrate that double cannot store the entire range of integers of the long type, to say nothing of float.
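A minimal sketch of the long/double behavior described above (the class name and values are illustrative, chosen only to make the rounding visible):

```csharp
using System;

class PrecisionDemo
{
    static void Main()
    {
        long a = long.MaxValue;     // 9223372036854775807
        long b = long.MaxValue - 1; // 9223372036854775806

        // Both implicit long -> double conversions are lossy: a double has
        // only 52 mantissa bits, so distinct longs near 2^63 collapse to
        // the same double value.
        double da = a;
        double db = b;
        Console.WriteLine(da == db); // True: the distinction is lost

        // Comparing a long with a double first converts the long to double,
        // so the comparison happens between two already-rounded values and
        // reports equality despite the lost precision.
        Console.WriteLine(a == da);  // True
        Console.WriteLine(b == da);  // True as well
    }
}
```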
-
Question 2 is about working with enumerations. Without going into how enumerations are implemented in C#, we can say they are simply a named range of values that we expect to see in a variable. Any enumeration is therefore just syntactic sugar: the variable remains the same variable, of the same underlying type. (This relates to point 4 about the compilation phase.) However, if we want the ability to recover the enumeration name associated with a value, simply substituting methods and references isn't enough; we need an entity that stores the names of all values together with the methods for working with them.

I even tried building something like this myself, and I would have finished it had I not hit language limitations that forced me toward reflection and the performance penalty that comes with it. Worse, I would have had to hand-write this entity, with all its type-specific methods, behind a wrapper for every enum. Yet C# already has working examples of metaprogramming where the language itself generates entities and methods; records, for example. Instead of automatically generating a static helper type at compile time and turning enumerations into a powerful tool, C# has produced nothing beyond the Enum class and seems reluctant to create anything new. Not only can this type not be used in arithmetic generics, it is also subject to constant boxing/unboxing, and it lacks basic methods for working with [Flags] (which it supposedly supports). Examples of this discussion can also be found here.
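To illustrate the limitation described above, here is a sketch of the kind of generic flag helper one has to write by hand today. Permissions, EnumHelper, and HasAnyFlag are hypothetical names, and Convert.ToInt64 is just one of several possible workarounds, each with a boxing or indirection cost:

```csharp
using System;

[Flags]
enum Permissions   // hypothetical example enum
{
    None  = 0,
    Read  = 1,
    Write = 2,
    Exec  = 4
}

static class EnumHelper
{
    // Since C# 7.3 a type parameter can be constrained to System.Enum,
    // but bitwise operators are still unavailable on TEnum, so a generic
    // flag helper has to detour through a conversion.
    public static bool HasAnyFlag<TEnum>(TEnum value, TEnum flags)
        where TEnum : struct, Enum
    {
        // The only applicable Convert.ToInt64 overload here takes object,
        // so the enum value is boxed on the way in.
        return (Convert.ToInt64(value) & Convert.ToInt64(flags)) != 0;
    }
}

class Demo
{
    static void Main()
    {
        var p = Permissions.Read | Permissions.Write;
        Console.WriteLine(EnumHelper.HasAnyFlag(p, Permissions.Write)); // True

        // The built-in Enum.HasFlag takes a System.Enum parameter, so the
        // argument is boxed at the call site (modern JITs can optimize
        // this particular pattern away).
        Console.WriteLine(p.HasFlag(Permissions.Exec));                 // False
    }
}
```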
-
Question 3. While studying collections and their use cases, I noticed that almost all C++ collections operate internally, one way or another, on trees or linked lists. I've thought long and hard about when linked lists are truly useful. Their seemingly obvious advantage of fast insertion and deletion is negligible in most cases, because we first have to determine exactly where to insert or delete an element, which means a sequential walk along the chain to find a pointer to the desired node. Most everyday operations, by contrast, require fast lookup and fast iteration over all values. I think linked lists are useful only in a narrow range of tasks, or when working under resource constraints. At first I didn't understand why C# has practically no linked-list-based collections; now I do, because the cost of allocating new memory, or of copying all the values when a list grows, is not that great, and in exchange you get a compact layout and fast iteration.

But something else surprised me: C# barely has collections based on binary search. Or rather, there are one or two, but they're very slow and unproductive. I tried writing my own binary-search list, which I think is the fastest way to keep data sorted (if you simply insert all the values into a new binary list), and it didn't go badly. In the end I concluded that C#'s SortedList works as a key-value collection, which seemed right, and I even considered rewriting my code to match SortedList. But then I hit another problem, and it's called "I can't guarantee key immutability." This affects class-based key types, as well as structs that hold reference-type fields. Even constraining the key type to the ICloneable interface doesn't guarantee that the user will implement it correctly. It turns out the user alone is responsible for keeping the fields used for comparison immutable. That invites hidden errors, and it isn't really a problem a high-level language should push onto users.

As I understand it, GetHashCode() doesn't fully protect against modification either, even though the hash itself is an int, a value type, which guarantees its immutability and exact copying. When collisions occur in hash tables (of which C# is full), a full comparison still happens, and I don't understand how hash tables are protected against this. Most importantly, I realized that C# sorely lacks ways to constrain types at declaration and construction time, so that its own data types and collections can guarantee their invariants and protect themselves from errors. I therefore believe the language needs a set of basic interfaces implemented automatically through compile-time metaprogramming, guaranteeing a correct implementation every time. For example, suppose I want to create an immutable key. I'd have to implement ICloneable to store a copy of the key, ensuring its immutability (which is critical for binary sorting). A user might implement it incorrectly, simply keeping a reference to an object that is later modified; the error wouldn't show up immediately and could have disastrous consequences, to say nothing of the performance cost. So I'd fall back on readonly fields, but as you probably know, in C# readonly only guarantees immutability for value types. With reference types it's also easy to make a mistake if you don't understand how they work.

Nevertheless, I'd really like to see a collection based on mine, or an optimization of the existing SortedList. In my opinion, binary-search collections can replace small hash tables (people often use hash tables by default, even for small keyed collections, for lack of alternatives), and when sorting, ordering, and fast access are needed simultaneously, they beat both hashes and trees. Hybrid collections can be built on top of binary-search lists. Many would agree that binary-search lists are slow to edit, but take a look at my solution; you might find it elegant.
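To make the mutable-key hazard concrete, here is a small sketch. MutableKey, k1, and k2 are hypothetical names; the exact failure mode depends on the collection's internals, but SortedList<TKey,TValue> does binary-search a sorted backing array:

```csharp
using System;
using System.Collections.Generic;

// A mutable reference type used as a key; nothing in the language stops
// its comparison-relevant state from changing after insertion.
class MutableKey : IComparable<MutableKey>
{
    public int Value;  // the field used for ordering
    public int CompareTo(MutableKey other) => Value.CompareTo(other.Value);
}

class Demo
{
    static void Main()
    {
        var sorted = new SortedList<MutableKey, string>();
        var k1 = new MutableKey { Value = 1 };
        var k2 = new MutableKey { Value = 2 };
        sorted.Add(k1, "one");
        sorted.Add(k2, "two");

        // Mutating a key after insertion silently breaks the invariant
        // that the backing array is sorted; binary search may now probe
        // the wrong half and miss entries that are physically present.
        k1.Value = 3;

        Console.WriteLine(sorted.ContainsKey(k2)); // likely False
    }
}
```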
-
The general set of thoughts shared here seems to contain a lot of misinformation and general misunderstanding. Many of the points raised are either personal opinion or are trivially disproven with a simple web search. Likewise, there are a large number of incorrect statements about how other languages work, particularly C++, and a conflation of different parts of the various ecosystems, such as what is language versus libraries versus runtime. I'd suggest doing some deeper research on these topics, and potentially engaging in general public forums around the languages to get such questions answered, rather than making presumptions about how things operate.
-
I'm more of a theorist than a practitioner, but from studying a range of programming literature I've distilled a few so-called "language ideals" to strive for.
I was initially hostile to C# when I first became interested in it 10 years ago; I didn't see enough advantages to choose it over C++.
However, returning to it 10 years later and seeing how it's evolving, I got interested again, and now I program in it as a hobby. So everything I describe below may not be a problem at all, but in my opinion it matters when considering "where we're going."
The point is that a programming language, as such, serves several basic functions.
So, after a bit of practice, I started asking myself why and how things work the way they do in C#.