The Original Sin in IT

You have a program with human-readable output. Now a second program needs that information. There are two choices:

  1. Do the technically right but challenging thing: define a protocol, rewrite the first program to emit it (utilizing something like libxo in the first program, maybe), and then write the second program against that protocol.
  2. Fuck everybody for the decades to come by deciding that parsing text is the way to go.

I would like to thank everybody involved in choosing number 2!

jc is an effort to fix (or rather, work around) that decision. Thanks to the author, and I hope it will make your DevOps life at least a bit more tolerable.
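
To illustrate the kind of workaround jc enables, here is a minimal Python sketch (my example, not from jc's documentation; it assumes jc and its df parser are installed):

# Feed `df` output through jc and work with named fields instead of column numbers.
import json
import subprocess

df_text = subprocess.run(["df"], capture_output=True, text=True, check=True).stdout
jc_json = subprocess.run(
    ["jc", "--df"], input=df_text, capture_output=True, text=True, check=True
).stdout

for mount in json.loads(jc_json):
    print(mount)  # each mount point is a dict with named fields, not "column 5"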

The Pseudo Narrow Waist in Unix

Background

This is a pain-driven response to the post about the Narrow Waist of Unix Architecture. If you have the time, please read that post.

The (very simplified and rough) TL;DR of the above link:

  1. The Internet has a “Narrow Waist”, the IP protocol. Anything above that layer (TCP, HTTP, etc.) does not need to be concerned with lower-level protocols. Each piece of software therefore does not need to concern itself with any specifics of how the data is transferred.
  2. Unix has a “Narrow Waist” too, which is text-based formats. You have a plethora of tools that work with text. On one side of the Narrow Waist we have different formats; on the other side, text-manipulating tools.

I agree with both points. I disagree with the implied greatness of the Unix “design” in this regard. I got the impression that my thoughts in this post are likely to be addressed by the next oilshell blog posts, but nevertheless…

Formats

Like a hierarchy of types, we have a hierarchy of formats. Bytes is the lowest level.

Bytes

Everything in Unix is Bytes. Like in programming languages, if you know the base type, you have a certain set of operations available to you. In the case of Bytes in Unix, those would be cp, zip, rsync, dd, xxd and quite a few others.

Text

A sub-type (a more specific type) of Bytes would be Text. Again, like in a programming language, if you know that you are dealing with data of a more specific type, you have more operations available to you. In the case of Text in Unix, those would be: wc, tr, sed, grep, diff, patch, text editors, etc.

X

For the purposes of this discussion, X is a sub-type of Text: CSV, or JSON, or program source code, etc.

Is JSON a sub-type of Text? Yes, in the same sense that a cell phone is a communication device, a cow is an animal, and a car is a transportation device. Exercise for the reader: are these useful abstractions?

The Text Hell

The typical Unix shell approach for working with X is the following sequence of steps:

  1. Use Text tools (because they are there and you are a proficient wielder)
  2. One of:
    1. Add a bunch of fragile code to bring the Text tools to a level where they understand enough of X (in some cases despite existing command-line tools that deal specifically with X)
    2. Write your own set of tools to deal with the relevant subset of X that you have.
  3. Optional but likely: suffer through fixing and extending step 2 for each new “corner case”.

The exceptions here are tools like jq and jc, which continue to gain popularity (for good reason, in my opinion). Yes, I am happy to see a declining number of “use sed” recommendations for dealing with JSON or XML.
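
For illustration, here is a minimal Python sketch of the difference (the JSON document is made up):

import json
import re

doc = '{"name": "db1", "tags": {"name": "ignored"}, "port": 5432}'

# The "Text tools" approach: a regex that happens to work on this particular sample...
m = re.search(r'"name":\s*"([^"]*)"', doc)
print(m.group(1))               # db1 here, but breaks on escaping, ordering, nesting

# The "know your X" approach: parse JSON as JSON.
print(json.loads(doc)["name"])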

Interestingly enough, if a programmer performed the above-mentioned atrocities in almost any programming language today, it would be pointed out to that person that this is not the way, that libraries should be used, and that they should “stop using a square peg for a round hole”. After a few unjustified repetitions of the same offense, that person should be fired.

Somehow this archaic “Unix is great, we love POSIX, we love Text” approach is still acceptable…

Pipes Text Hell

  1. Create a pipe between different programs (text output becomes text input of the next program)
  2. Use a bunch of fragile code to transform between what the first program produces and what the second one consumes.

Where Text Abstraction is not Useful

Almost everywhere. In order to do some of the most meaningful, high-level operations on the data, you can’t ignore that it’s X and just work with it as if it were Text.

Editing

The original post says that since the format is Text, you can use vim to edit it. Yes, you can… but did you notice that any self-respecting text editor comes with plugins for various X’s? Why is that? Because even the amount of useful “text editing” is limited when all you know is that you are dealing with Text. You need plugins that understand the semantics of X in order to be more productive.

Wanna edit CSV in a text editor without a CSV plugin? OK. I prefer spreadsheet software, though.

Have you noticed that most developers use IDEs that “understand” the code and not Notepad?

Lines Count

Simple, right? wc -l my.csv. Do you know for sure that the quoted text embedded in the fields does not contain newlines? Oops. Does the file have a header line? Oops.
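
A minimal sketch of counting records the CSV way instead of the Text way (same my.csv as above):

import csv

with open("my.csv", newline="") as f:
    row_count = sum(1 for _ in csv.reader(f))

# Rows as the CSV format sees them, quoted embedded newlines included;
# subtract 1 yourself if the file has a header line.
print(row_count)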

Text Replacement

Want to rename a method in a Java program? sed -i 's/my_method/our_method/g' *.java, right? Well, that depends on your luck. I would highly recommend doing this kind of refactoring with an IDE that actually understands Java, so that you rename only the specific method in the specific class, as opposed to unfortunately named methods and variables, not to mention arbitrary strings.
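
A toy demonstration of why blind text replacement hurts (the Java snippet is made up):

import re

java_source = '''
class Payments {
    void my_method() {}                      // the method we want to rename
}
class Reports {
    void my_method() {}                      // a different method, same name
    String note = "calling my_method now";   // just a string
}
'''

# All three occurrences get renamed; an IDE would rename only the one you asked for.
print(re.sub(r"my_method", "our_method", java_source))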

Search / Indexing

Yep… except that understanding the semantics helps here quite a bit. That’s why the utilities that do the indexing are the ones that understand specific programming languages.

Conclusion

I do not understand the fascination with text. I am still waiting for any convincing argument for why it is so “great” and why the interoperability it provides is not largely a myth. Having a set of tools that enables one to do a subpar job each time is better than not having them, but is it the best we can do?

My previous dream of eradicating text where it does not make sense (my blog post from 2009) came true with HTTP/2. Apparently I’m not alone in this regard.

Sorry if anything here was harsh. It’s years of pain.

Clarification – Layering

Added: 2022-02-07 (answering, I hope, https://www.reddit.com/r/ProgrammingLanguages/comments/t2bmf2/comment/hzm7n44/)

Layering in the case of the IP protocol works just fine. The implementer of an HTTP server really does not care about low-level transport details such as Ethernet. Likewise, the low-level drivers don’t care exactly which data they deliver. The two sides of the Waist don’t care about each other. This works great!

My claim is that in the case of the Text Narrow Waist, where X is on one side and the Text tools are on the other, there are two options:

  1. The tools ignore X, and you get very limited functionality out of them.
  2. The tools know about X, but then it’s a “leaky abstraction” and not exactly a Narrow Waist.

That’s why I think that in the case of Text, the Narrow Waist is more of an illusion.


Have a nice week!

On Accidental Serialization Formats

Let’s talk about the “just separate with a comma and stick it into one field” type of serialization.

You had two strings (abc and def) and you joined them with a separator. What do you have now? One string with two elements, right? Right, abc,def. Well… two or more actually, depending on how many times the chosen separator occurred in the original strings: if they were a,bc and def, you’ve got a,bc,def, which is 3 elements according to our format. Oops. And that is leaving out the question of whether leading and trailing spaces are significant.

Wanna add escaping for the separator then? a,bc and def are now serialized as a\,bc,def. Now the parsing has become more complex. You can’t just split the string by the separator (you would get 3 elements: a\ and bc and def). You need to scan the serialized data, taking escaping into account while splitting, and you also need to remove the escape character from the result. How about escaping the escape character? If the original data is a\bc, it is serialized as a\\bc. Yet another thing not to forget.

Don’t like escaping then? How about encoding, like in a URL? a,bc becomes a%2Cbc. You can now once again split the string by the separator character… assuming it was encoded. Which characters do you encode, anyway? If you encode all ASCII characters, the result is 3 times the size of the original and completely unreadable. At least you are “safe” with regard to the separator now; it is definitely encoded, so no splitting problems. You do have to add a decoding routine now, though.
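
A minimal sketch of the failure mode and of the boring fix (values are made up):

import json

values = ["a,bc", "def"]

naive = ",".join(values)
print(naive.split(","))          # ['a', 'bc', 'def'] – 3 elements, data corrupted

serialized = json.dumps(values)  # '["a,bc", "def"]'
print(json.loads(serialized))    # ['a,bc', 'def'] – round-trips correctly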

If your serialized thing goes into a database, consider how indexing would work. It probably won’t. Maybe you should model your domain properly in the database and not serialize at all. Hint: if the values ever need to be treated differently/separately by the database, they go into different cells/rows/columns/fields, not one. There are very rare exceptions. A notable exception is the ability of databases to handle JSON fields (examples: MySQL, PostgreSQL). Note that this capability may or may not fit your use case.

Want to satisfy your artistic needs and do something clever about the serialization? Do it at home then please. Don’t waste time that your colleagues could use on something more productive than dealing with your custom format.

Strong advice: don’t invent a custom serialization format; use existing serialization formats and libraries.


Seen something to add to the above? Leave a comment!

API-less

I was always complaining about how AWS makes APIs that are inconsistent in so many dimensions. AWS teams can’t even agree (read: “don’t care enough”) on how to name pagination-related fields.

Because I am such an important person (who was that again?), I hope everybody in AWS (or at least someone) has read my blog and that they are checking it, by mistake or otherwise, at least twice a day for new extreme wisdom (aka stupid rants).

One of these people said “Fook you, Ilya” and made a service without an API. Not kidding. Meet AWS Control Tower. No API, no CLI, no CloudFormation. “Take that, Ilya!”

For some balance, I will mention that other people in AWS are trying to build some uniform API. Not stellar at the moment, but hey, at least they are trying.

Have a nice $(curl https://api.timeofday.example.com/) !

The Web vs Unix

I would like to share my perspective on what’s wrong with Unix. This time by comparing and contrasting it to the Web.

User Agent

With some amount of stretch, one could say that the equivalent of the browser (user agent) would be a shell with a terminal. You interact with it and it does things for you, like the browser.

User Interface

The web browser is capable of rendering textual and graphical content. The Unix shell relies on the terminal (usually an emulator mimicking decades-old hardware) for its user interface and in most cases is limited to fixed-width text.

The main difference is how you can interact with the content. In the shell – you can’t. The shell is out of the game. The content that you see on your screen goes from a program straight into the terminal, bypassing the shell. Compared to a browser, that sounds insane: you are simply unable to interact with the content that you see. Ironically enough, this is happening in what’s called an “interactive shell”. Some terminals match the text against a predefined set of patterns and allow some minimal interaction, such as the ability to click a link to open it in a browser.

The browser is a strict superset when we look at interaction capabilities: you can type, and you can interact with the objects on the screen by clicking them. What an amazing concept! Maybe some day shells will be able to do that too! Meanwhile, in the shell, you type your commands and get a dump of text back, with rare exceptions.

I would summarize that the shell is a shitty user agent. 💩

I can already hear the coming “the shell is not supposed to do that” argument. My opinion: the shell is supposed to do whatever is needed for me to be productive. If it’s your “Unix Philosophy” vs. me being productive, then you can continue to use Notepad (or ed, the standard text editor, for that matter) and I will be using an IDE, OK?

Layout Engine

They also call it the browser engine. That’s because on the web it’s in the browser. But where is it on Unix? Everywhere. Yes, “Make each program do one thing well” went out of the window long ago. Each program does (hopefully) one thing, and then it also does the layout of its output.

Each program has the following main options for handling the input/output:

  1. Primitive output – the program dumps some text on standard output. Let’s include colored text here. It’s just some additional color codes. This is equivalent to not having a layout engine. Sample program in this category: grep.
  2. Interactive UI – the program uses ncurses or a similar library. This is a relatively small number of programs.
  3. Layout engine – the program contains some form of a layout engine. This is pretty common. Sample programs in this category: ls, ps, top, diff (columns output), wc, …

Common issues with these “layout engines” causing unpleasant, broken views in Unix include:

  1. Improper handling of data which is wider than some hard-coded fixed value
  2. Improper handling of Unicode
  3. Failure to accommodate “unexpected” terminal escape codes in the input (which, after processing, find their way to the output in utilities like sed)

TCP/IP

Pipe

Let’s talk about pipes. Before everybody gets offended and says that pipes are the sacred cow (sorry, the best feature) of Unix: yes, they probably are.

Pipes would roughly correspond to the TCP/IP protocols. Pipes deliver data. For now, let’s set aside the fact that they are unidirectional, as opposed to TCP, which is bidirectional.

Since the web is a stack of protocols, the obvious question is how the other parts of the stack correspond. Read on.

HTTP

HTTP would correspond to text. Well, mostly text. Sometimes null-character-separated records. Sometimes something else. That’s the standard “format” for communication between Unix applications.

“Write programs to handle text streams, because that is a universal interface.” – Basics of the Unix Philosophy.

The original claim is that text is the best for interoperability: a large number of utilities take text as input and also output text, manipulating it in many different ways in between. Sounds like a dream. Except that in reality this dream turned out to be a nightmare.

Incompatible, ad-hoc formats

Text on Unix is not a single format. It’s a bunch of ad-hoc formats, typically incompatible between different programs. That’s why we have a variety of tools such as sed, cut, awk and the like. Here is my hot take: these tools are not solutions, they are workarounds. When you don’t have a protocol to communicate between applications, you need a bunch of adapters. Like Sisyphus, one needs to write these adapters. All the time. Forever. Text parsing and manipulation feels like a core part of Unix. From my perspective, it’s accidental complexity.

On a philosophical note: the “universal interface” should have been a stream of bytes or maybe even bits. I guess that was not found to be very useful. Apparently, if you add a line-separator character, it is good enough to become a recommendation. But why stop there? Maybe add more structure? Maybe accommodate the fact that most of the data is either records with named fields or tables with named columns? Are you sure you counted the columns right for your awk '{print $8}' ?
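
A small illustration of positional columns vs named fields (the data is made up):

line = "root  1  0.0  0.1  16936  1088  ?  Ss  12:01  0:03  /sbin/init"
print(line.split()[7])            # "Ss" – hope you counted the columns right

record = {"user": "root", "pid": 1, "stat": "Ss", "command": "/sbin/init"}
print(record["stat"])             # no counting, the field has a name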

This is in contrast to HTTP which is spoken by everyone on the Web.

Some Hope

Newer CLIs usually do have an option to output JSON or (less prevalent) YAML. They are forming a new ecosystem with a different set of tools. From my perspective, this proves the point that the “universal interface” might not be that universal and not as productive as envisioned (should I dare say “unacceptable”?).

Should it be the halfway-structured, aka semi-structured, JSON? Is it the sweet spot? I mean, why stop here? Maybe we need something with a schema? Let me know what you think.

One of the notable projects, jc, is an adapter between the “universal interface” and something that you would actually like to work with.

Shameless Plug

If only we could have a shell that could play a role in this new ecosystem… or maybe even push it a bit in the direction of having semantically meaningful objects on the screen so that interaction would be possible…

Yes, I am aware of other projects solving the same issue. While we mostly agree on the problem, I haven’t yet seen a project which sees the solution the same way as I do.


Happy DevOps-ing!

Running Elegant bash on Simple Kubernetes (Rant)

I was triggered by seeing “elegant” and “bash” in the same sentence. Here are the titles I suggest the original author consider for the next blog posts:

  1. Guide to Expressive Assembler
  2. Removing Types from Scala
  3. Adding Exceptions to Go
  4. Introduction to Concise Java
  5. Writing Synchronous JavaScript with Threads
  6. Using Forth Without the Stack
  7. Adding Curly Braces to Python
  8. Making Guido Like Functional Programming
  9. Using Uniform AWS APIs
  10. Writing Safe big C Programs
  11. Making C a Higher-Level Language than Portable Assembler
  12. Making your Database Stateless
  13. Making Eventual Consistency Immediate
  14. How to Know that Backups are Working Without Doing Test Recovery
  15. Finding Quality Code on Random Internet Sites
  16. All Programming Languages are Beautiful (Illustrated)
  17. Writing Bug-Free Code that does not Need Reviews
  18. Learning Modern C++ in 3 Easy Steps in 2 Days
  19. Stopping Hype Around Kubernetes – Practical Guide
  20. Preventing Appearance of new JavaScript Frameworks
  21. Why node_modules is not a Dumpster
  22. Removing Most of the Syntax from Perl
  23. Understanding Monads in 10 Minutes
  24. How to Stop Debates and Fighting around OSS Licensing within 1 Month
  25. Replacing bash in Next 20 Years

P.S. I’m still using bash in some places. Sadly, it’s still the most appropriate solution in some cases, as if we were living in the 90’s.

Ops Tools Marketing Bullshit Dictionary

This article is aimed mostly at juniors. Lacking experience, you are a soft target for marketing bullshit. I encourage critical thinking when evaluating the products and services that are forced down your throat, I mean, marketed to you.

Background

The purpose of dishonest marketing is to mislead you into using a product and/or a service that you don’t really need. This can waste your time and money.

Separate the products from the marketing. The products themselves are not inherently good or bad. The main question is whether to use product X or Y (and how), or to code the solution yourself, given your specific situation: team, skill levels, requirements, etc. This post will guide you in seeing through the marketing bullshit when evaluating these products.

The list below is not comprehensive and I might add items later.

Products Marketing Bullshit

( in no particular order )

Cool / coolness

If something is “cool” and you use it, you might write a blog post about it. It has some PR value to you or your company. Other than that, “coolness” of a product has nothing to do with its usefulness for your situation/use case.

Our product is so much better than X

Note that often X will be the worst possible alternative. For example, configuration management tools will most likely be compared to manual work rather than to other configuration management tools or any other kind of automation. Ignore this and compare the suggested product to other viable alternatives.

Use our product or become irrelevant / Our product is [becoming] industry standard

The message is sometimes direct, but often it is hidden between the lines. This is an attempt, many times successful, to exploit the fear of missing out. Learn the underlying principles, then decide whether you want to use one of the products, which come and go. Learning the underlying principles will take more time upfront, but you will become more professional. If you are a junior Ops person, learn Linux and how to use it without any configuration management tools first; learn to use the cloud without CloudFormation or Terraform first; see the problems which these tools are trying to solve before using the tools.

Our customers include big companies such as X, Y and Z

You have to understand what these companies are using the product/service for. It could be a pilot, for example. Check for common owners or investors. Remember that big companies make mistakes too. Anyhow, this is irrelevant to your use case and your situation until proven otherwise.

Success story

The formula is simple: deep shit + our product = great success. Look closely. Often, deep shit + another sensible alternative solution would also be a great success. For example, configuration management tools show a manual process and then how it was automated. Chances are that automating with scripts would be another viable alternative.

We abstract all the hard work away from you

You must understand what exactly is abstracted and how. After learning that, it might happen that the perceived value of the product drops. Skip the learning step and start using the tool, and you are in danger: you might need to learn whatever was abstracted in a hurry when facing a bug in the tool, or be at the mercy of someone else to fix it. Remember: abstractions come at a cost.

Don’t reinvent the wheel, we have figured out everything already

This exploits your fear of looking stupid. Would you use a spaceship for travelling from Moscow to Tokyo? I guess it would be cheaper to use a plane. Even if tool X solves your problem, it might cost you time and complexity, and it might be easier to code the solution yourself or to use a simpler tool. There are probably more legitimate reasons not to use tool X.

Companies Marketing Bullshit

Industry leading company

Who is not? Define your industry narrowly enough and you are the leading company! Simply ignore.


Did I miss something? Let me know here in comments or on Reddit. Have a nice day!

bash or Python? The Square Pegs and a Round Hole Situation

The question “should I do it in bash or in Python?” is both frustrating and common. Why does one even need to choose between two alternatives which are both inadequate for the task at hand? Why try to pick one of the square pegs for the round hole? I believe that you should not be in this annoying situation when you are just trying to write a script and get back to your endless stream of other todos.

Both are Inadequate for Ops

bash does not meet any modern expectations for syntax or error handling, nor does it have the ability to work with structured data (beyond arrays and associative arrays, which cannot be nested). Let it go. You don’t usually code in assembly, FORTRAN, C, or C++, do you? They just don’t match typical Ops tasks. Don’t make your life harder than it should be. Let it go. (Let’s not make this a blanket statement; use your own judgement about when to make an exception.)

Python, along with many other languages, is a general-purpose programming language that was not intended to solve specifically Ops problems. The consequence is longer and less readable scripts when dealing with files or running external programs, both of which are pretty common in Ops. For example, try to check the status code of every program you run and see what your code looks like. Sure, you can import a 3rd-party library for that. Is that as convenient as having automatic checking by default + a list of known programs which don’t return zero + convenient syntax for specifying/overriding the expected exit code? I guess not.
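
A hedged sketch of what “check every status code” looks like in plain Python (program names and paths are made up):

import subprocess

# Without check=True, a non-zero exit status is silently ignored...
result = subprocess.run(["rsync", "-a", "src/", "dst/"])
if result.returncode != 0:
    raise SystemExit(f"rsync failed with exit code {result.returncode}")

# ...and even with check=True you still special-case the programs that are
# known to return non-zero in normal operation (grep, diff, etc.), by hand:
result = subprocess.run(["grep", "-q", "ERROR", "app.log"])
if result.returncode not in (0, 1):   # for grep, 1 just means "not found"
    raise SystemExit("grep itself failed")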

Your disorientation and frustration are completely legitimate.

Alternatives

A multitude of attempts by different people to provide viable alternatives is in progress. Like other authors, I would like to help my Ops colleagues avoid frustration and be productive. It just feels good.


The informative section is over. A shameless plug about the alternative that I am developing, and how it is special, follows.

Next Generation Shell as an Alternative

How does Next Generation Shell differ from the alternatives? A reasonable question, one that I would ask too before investing any more time if I were the reader.

UI

Current shells, as well as the proposed alternatives, treat the UI as if nothing has happened since the 70s: mostly typing commands and getting some text back.

How about real interaction with the objects on the screen? Oops. In a typical shell there are no objects on the screen; it’s just a dumpster of combined text (if you are lucky; it could be binary) from the stdout and stderr of one or more processes (unless steps were taken). WTF? It doesn’t help you to win. It helps you to lose time. NGS is intended to make you productive. I have organized my thoughts about how the UI should look and behave on the wiki page.

Programming language

It looks like the alternative solutions take the “let’s make the shell better” approach and are therefore heavily based on shell syntax and paradigms.

There is also the “let’s make a library for an existing language” approach, which doesn’t hit the target either. You can’t have dedicated syntax for common Ops tasks, for example.

And finally, there is “let’s make a library and a syntax on top of an existing language”. Not sure about this one. Sounds good in theory. I looked at something like this some time ago and the overall impression was… awkward (for lack of a better word).

The approach in NGS: let’s make a good programming language for Ops, one which fits the use cases and has syntax and facilities for the most common tasks, such as running external programs.

The language follows the principle that the most common tasks should have their own syntax or a library function (depending on usage frequency). Examples:

  1. ``external program`` – runs an external program and parses the output (JSON is auto-detected; easily extensible for anything else).
  2. status(), log(), debug(), retry() – standard library functions. How many times should an Ops person have to write his/her own retry()? It’s insane. (See the sketch after this list.)
  3. Argv() facility for constructing command-line parameters (for calling external programs).
  4. p=$(my_prog my_args &); ....; p.wait().
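
The retry() that Ops people keep rewriting looks roughly like this (a Python sketch of the idea, not NGS code; check_service_is_up is hypothetical):

import time

def retry(times, delay, func, *args, **kwargs):
    """Call func until it succeeds or we run out of attempts."""
    for attempt in range(1, times + 1):
        try:
            return func(*args, **kwargs)
        except Exception as e:
            if attempt == times:
                raise
            print(f"attempt {attempt} failed ({e}), retrying in {delay}s")
            time.sleep(delay)

# retry(5, 2, check_service_is_up)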

There is a small number of “big” core concepts in the language (types with inheritance, multiple dispatch and exceptions).

Dogfooding

Most of the standard library is written in NGS.

The UI (which I only recently started working on) is written in NGS. It doesn’t make sense that when a user of the shell wants to fix a bug in the UI, he/she suddenly needs to learn Go, Rust, C or whatever other language.

We use NGS at work.

Most of the demo scripts come from either current or previous work.

How to proceed?

  1. Install NGS.
  2. Consult documentation and look at sample scripts
  3. Write scripts for non-production-critical tasks.
  4. I am here to help. Do not hesitate to contact me with questions, suggestions, or feedback. If anything Ops-y you are trying to do seems to be easier in bash or Python – open an issue, because from my perspective that’s a bug. Something is inconvenient? Yep, also a bug.

If you are like me, you will find at least some satisfaction in using the most appropriate tool before continuing to the myriad of other tasks in your todo queue.

Or Just Learn More

  1. Is NGS for you? Take a look at intended use cases.
  2. Take a look at how NGS compares to other programming languages.
  3. Browse sample scripts to get some impression about the language.

Reddit: https://www.reddit.com/r/devops/comments/jm3cwe/bash_or_python_the_square_pegs_and_a_round_hole/

“But it works”

TL;DR – this is not nearly good enough in most cases, and it’s only a small fraction of what you are paid for.

I want this post to be the canonical place to refer people who say “but it works” to, because those of us who explain why this is not OK are tired of repeating the same arguments, me included.

You are paid for …

The following is not an exhaustive list, but it should give you some perspective that is the opposite of the narrow-minded “but it works”.

Typically, your Software Engineering $Job pays you for:

  1. Of course the thing must work. But also..
  2. It should continue working
    1. Gives deprecation warnings? Probably not good.
    2. Only runs on Node.js v10 LTS which is end of life in less than a year (as of writing)? Think again.
    3. Got away with invalid XML? Can you be sure that the next version of parser won’t be stricter?
  3. It should be maintainable (aka you and other people should find it easy to operate and modify, now and years later)
    1. Code quality
    2. Tests (if you don’t have tests, even your basic claim that something “works” is under suspicion)
    3. Documentation
      1. How to use your sh*t?
      2. How to set up the development environment?
      3. Decisions
      4. Non-obvious code parts
  4. It should be production ready, not abstract “works” or even worse “works on my machine”
    1. Logs
    2. Metrics
    3. Tested in dev/qa/whatever-you-call it environment
    4. Reproducible – tomorrow they make a new environment, “qa42”, in a different AWS account in a different region. Could somebody else deploy your sh*t there without talking to you?
    5. Update 2020-09-13 (from Guy Egozy) – Scalable enough to be used in production.

If you claim that you are “done” because “it works”, congratulations, you have a (probably) working prototype. That’s typically a small part of a project.


Related term: “Tactical Tornado” – look it up.


Update 2020-09-13: Reddit discussion

Python 3.8 Makes me Sad Again

Looking at some “exciting” features landing in Python 3.8, I’m still disappointed and frustrated by the language… as I am by quite a few other languages.

As an author of another programming language, I can’t stop thinking about how things “should have been done” from my perspective. I want to be explicit here: my perspective is biased towards correctness and “WTF are you doing?”. Therefore, take everything here with an appropriate amount of salt.

Yes, not talking about any “positive” changes here.

Assignment Expressions

There is new syntax := that assigns values to variables as part of a larger expression.

A fix which couldn’t be the best possible one because of a previous design decision.

“Somebody” ignored the wisdom of Lisp, which was “everything is an expression and evaluates to a value” (no statements vs. expressions), and made assignment a statement in Python years ago. Now this cannot be fixed in a straightforward manner; it must be another syntax. Two different syntaxes for almost the same thing: = for assignment as a statement and := for assignment as an expression.
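
A small example of the two syntaxes living side by side (the input line is made up):

import re

line = "error: disk full"

m = re.search(r"error: (.*)", line)          # '=' – assignment as a statement
if m:
    print(m.group(1))

if (m := re.search(r"error: (.*)", line)):   # ':=' – assignment inside an expression
    print(m.group(1))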

Positional-only Parameters

There is a new function parameter syntax / to indicate that some function parameters must be specified positionally and cannot be used as keyword arguments:

def f(a, b, /, c, d, *, e, f):
    print(a, b, c, d, e, f)
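
With this signature, a and b can only be passed positionally; a quick sketch of how calls behave (my example, not from the release notes):

f(1, 2, 3, d=4, e=5, f=6)        # OK: a and b positional, e and f as keywords
f(a=1, b=2, c=3, d=4, e=5, f=6)  # TypeError: a and b are positional-only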

Trying to clean up a mess created by mixing positional and named parameters. Unfortunately, I did not give this enough thought at the time and copied the parameter-handling behaviour from Python. Now NGS has the same problem that Python had before 3.8. Hopefully, I will be able to fix it in some more elegant way than Python did.

LRU cache

functools.lru_cache() can now be used as a straight decorator rather than as a function returning a decorator. So both of these are now supported
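
For illustration, the two decorator forms in question (a minimal sketch, mine rather than the release-notes example):

from functools import lru_cache

@lru_cache                       # Python 3.8+: works as a plain decorator
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

@lru_cache(maxsize=256)          # the older form: a function returning a decorator
def fib2(n):
    return n if n < 2 else fib2(n - 1) + fib2(n - 2)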

OK. Bug fix. But … (functools.py)

    if isinstance(maxsize, int):
        # Negative maxsize is treated as 0
        if maxsize < 0:
            maxsize = 0

If you are setting the LRU cache size to a negative number, it’s 99% certain to be a mistake. In NGS that would be an exception. That silently-accept-garbage approach is the same one that causes rm -rf $myfolder/ to remove / when myfolder is unset. Note that the maxsize code is not new, but it’s still there in Python 3.8. I guess that is another mistake which cannot be easily fixed now, because fixing it would break “working” code.

Collections

The _asdict() method for collections.namedtuple() now returns a dict instead of a collections.OrderedDict. This works because regular dicts have guaranteed ordering since Python 3.7

OK. Everybody made the mistake of having unordered maps: Perl, Ruby, Python.

  1. Ruby fixed that with the release of version 1.9 in 2008 (according to the post).
  2. Python fixed that with the release of version 3.7 in 2018 (which I take as 10 years of “f*ck you, the developer”).
  3. Perl keeps using unordered maps according to documentation.
  4. Same for Raku, again according to the documentation.

NGS has had ordered maps from the start, but that’s not a fair comparison because the NGS project started in 2013, when the mistake was already understood.


How does all of that help you, the reader? I encourage deeper thinking about the choice of programming languages that you use. From my perspective, all languages suck, while NGS aims to suck less than the rest for the intended use cases (tl;dr – DevOps scripting).


Update 2020-08-16

Discussions:

  1. https://news.ycombinator.com/item?id=24176823
  2. https://lobste.rs/s/rgcgjz/python_3_8_makes_me_sad_again

Update 2020-08-17

It looks like the article above needs some clarification about my perspective: background, what I am doing and why.

TL;DR

The main points of the article are:

  1. Everything still sucks, including Python. By “sucks” I mean it does not fit well with the tasks I need to do, nor is it aligned with how I think about these tasks.
  2. I am trying to help the situation and the industry by developing my own programming language.

Background about my Thinking

In general, I’m amazed at how bad the overall state of programming is. That includes:

  1. All programming languages that I know, including my own NGS. This is aggravated by the inability to fix anything properly in any language with a substantial amount of code written in it, because you will be breaking existing code. And if you do break it, you get a shitstorm like with Python 3 or Perl 6 (Raku).
  2. The code quality of programs written in all languages. Most of the code that I have seen is bad. Sometimes even in official examples.
  3. The quality of available materials, which are sometimes plainly wrong.
  4. Many of the existing “Infrastructure as code” solutions, which in most cases follow the same path:
    1. Invent a DSL or use YAML.
    2. “Figure out” later that it’s not powerful enough (by the way, there is an elegant solution – a programming language, forgot the name).
    3. Create a pretty ugly programming language on top of a DSL that was intended for data.

I am creating a new programming language and a shell out of frustration with the current situation, especially with bash and Python. Why these two? Because those are what I was, and still am, using to get my tasks done.

Are these languages bad? I don’t think that question has any good answer. These languages don’t fit the tasks that I’m trying to do, nor are they aligned with how I think, while apparently being among the best choices available.

This Article Background

  1. Saw some post in RSS about new features in Python 3.8.
  2. Took a look.
  3. Yep, everything is still f*cked up.
  4. Wrote a post about it, which was not meant to be a “deep discussion about Python flaws”.

I was not planning to invest more time in this but here I am trying to clarify.

And your Language is Better? Really?

Let’s clarify “better”. For me, it’s to suck less than the rest for the intended use cases.

author really does consider himself a superior language designer than the Python core-dev team

( From https://www.reddit.com/r/Python/comments/iartgp/python_38_makes_me_sad_again/ )

I consider myself in much easier circumstances:

  1. No substantial amount of code is written in NGS yet.
  2. I’m starting later and therefore have the advantage of looking at more languages, avoiding bad parts, copying (with adaptation) the good parts.
  3. NGS targets a niche, it’s not intended to be general purpose language. Choices are clearer and easier when you target a niche.
  4. The language that I’m creating is, almost by definition, more aligned with how I think. I hope that people out there will benefit from using NGS if it is more aligned with how they think too.
  5. See also my Creating a language is easier now (2016) post.

Will I be able to make a “better” language?

From a technical perspective, that’s probable: I am a skilled programmer in several languages, and I have more languages to look at than anybody had before me. My disadvantage is not having much experience in language design. I’m trying to offset that by thinking hard (about the language, the essence of what is being expressed, common patterns, etc.), looking at other languages, and experimenting.

From a marketing perspective, I need to learn a lot. I am aware that “technically better” doesn’t matter as much as I would like it to. Without a community and users, this would be a failed project.

Also don’t forget luck which I might or might not have.

What if NGS fails?

I think that the situation today is unbearable. I’m trying to fix it. I feel like I have to, despite the odds. I hope that even if NGS fails to move the industry forward, it will be useful to somebody who attempts that later.