AWS Cloud Control is in for Review

Since 2016 I have worked on an AWS library. On 2021-09-30, AWS released something comparable, the AWS Cloud Control API. If it’s comparable, let’s compare. I hope the reader will find this unique perspective interesting.

Purpose

AWS Cloud Control provides a uniform API to Create, Read, Update, Delete and List different resources: CRUDL, in short.

My library aims to provide a uniform, simplified, declarative interface whose main functionality is converging to a desired state.

Supported Resources

Cloud Control supports many more resources. My library supports very few: the ones I needed most for my work. That’s just how an experimental library written by one person in their spare time compares with a product from AWS.

Mode of Operation

Cloud Control

Operates on one resource at a time.

Cloud Control provides a relatively low-level API: CRUDL. The advantage of Cloud Control over existing APIs is uniformity, not achieving a desired state. You either create or update, and you need to know which of the two you are doing. (Deletion is a different story.)

I am amazed that somebody at AWS was able to pull this off… in a company where different services don’t agree on naming conventions for tagging resources, on how and what can be tagged, or on naming of pagination fields. (Which, by the way, is a nightmare for users, though typically abstracted away by Terraform or CloudFormation.) Sorry for the rant.

The operations are performed asynchronously. You have an API to query the status: GetResourceRequestStatus. This approach is better for advanced use cases, where multiple resources are being created simultaneously. (Like CloudFormation?)

The operations are idempotent. This is achieved using a client token, as in other AWS services.
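For illustration, here is a minimal sketch of this flow using boto3’s cloudcontrol client; the resource type, property names, and token value are just an example, not taken from any real setup:

    import json
    import boto3

    client = boto3.client("cloudcontrol")

    # The call returns immediately with a request token...
    response = client.create_resource(
        TypeName="AWS::Logs::LogGroup",                      # illustrative resource type
        DesiredState=json.dumps({"LogGroupName": "my-log-group"}),
        ClientToken="my-unique-token",                       # retrying with the same token is safe
    )
    token = response["ProgressEvent"]["RequestToken"]

    # ...and you poll GetResourceRequestStatus until a terminal state is reached.
    status = client.get_resource_request_status(RequestToken=token)
    print(status["ProgressEvent"]["OperationStatus"])        # e.g. PENDING, IN_PROGRESS, SUCCESS, FAILED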

My Library

Operates either on one resource or on a set of resources of the same type at a time. Edit: … from the default, specified, or all regions.

The library looks up the resources and either creates or updates them, depending on whether they exist or not.

The operations are performed synchronously. The library throws an exception if anything goes wrong. I find this approach much more user-friendly for basic use cases.

Idempotency is out of the picture. Should it be in? Probably; it looks like an omission on my side. How best to shield the library’s users from this issue? I don’t know yet. I need to think about it.

Describing Desired State

Cloud Control

The “create” API has a desired-state parameter: DesiredState.

The “update” API doesn’t, which I find very strange. I thought desired state was something to be achieved no matter what the current state is. Since the desired state is only used in the “create” API, from my perspective it could just as well have been called “properties” or something similar.

The “update” API has a PatchDocument parameter, which I find user-unfriendly for the following reasons (see the sketch after the list):

  1. Who actually works with the JSON Patch format? This is the first time I’ve encountered an API that requires me to use it.
  2. I think it is less mentally taxing to describe … the desired state, not the delta. This is typically handled by IaC tools, including CloudFormation: they calculate the diff between the current and desired state for the user, so the user doesn’t have to deal with it.
  3. It makes update inconsistent with create.
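To make the asymmetry concrete, here is the sketch mentioned above, a hedged boto3 example; the resource type, identifier, and properties are illustrative only:

    import json
    import boto3

    client = boto3.client("cloudcontrol")

    # Create: you describe the desired state...
    client.create_resource(
        TypeName="AWS::Logs::LogGroup",
        DesiredState=json.dumps({"LogGroupName": "my-log-group", "RetentionInDays": 7}),
    )

    # ...but update takes an RFC 6902 JSON Patch document instead,
    # so the delta is yours to describe.
    client.update_resource(
        TypeName="AWS::Logs::LogGroup",
        Identifier="my-log-group",
        PatchDocument=json.dumps([
            {"op": "replace", "path": "/RetentionInDays", "value": 30},
        ]),
    )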

My Library

There is no separate create and update. The user specifies the desired state as a parameter to the “converge” function. Converge then either creates or updates the resource or resources. The (nonexistent) “create” and “update” are therefore completely uniform in the library.

Search / Filtering

Cloud Control

Search is not supported. In practice, that means listing all the resources and filtering the result on the client side. Typically that can be done in the AWS CLI with the --query flag, which is supported globally for any AWS CLI command. Unfortunately, I don’t see a way to make it work here. The returned result has a ResourceDescriptions field, an array where each item has a Properties field, and that Properties field is a string containing JSON. Apparently JMESPath cannot parse embedded JSON in this situation. This means the output of the Cloud Control AWS CLI will have to be piped to jq, or maybe to a programming language, for filtering and/or further processing.
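For instance, a minimal sketch of doing that filtering from Python; the resource type and the property being filtered on are assumptions for illustration:

    import json
    import boto3

    client = boto3.client("cloudcontrol")

    # NextToken handling omitted for brevity.
    page = client.list_resources(TypeName="AWS::EC2::VPC")

    matching = []
    for desc in page["ResourceDescriptions"]:
        props = json.loads(desc["Properties"])   # Properties is a JSON string, so parse it first
        if props.get("CidrBlock") == "10.0.0.0/16":
            matching.append(props)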

My Library

While the number of supported resources and filters is low, the library does support filtering. The filtering is done on the server or on the client, completely transparently to the library’s users. What’s done on the server and what’s done on the client? Simple: when filtering by a given property is supported on the server side, it’s done there; otherwise, it’s done on the client.

Desired State Format

Cloud Control

Cloud Control uses CloudFormation syntax. This makes sense.

My Library

My library uses the same format you would see when using the AWS CLI to describe the resource. It allows access to properties that CloudFormation does not have, and is unlikely to ever have, so for example the following works:

  1. instance = AWS::Instance(...).converge(State = 'running') — creates or turns on the specified EC2 instance(s). Turning EC2 instances on or off is not supported in CloudFormation.
  2. vpc = AWS::Vpc(IsDefault=true).expect(1) — gets a reference to the default VPC (for use in further operations).

Closing Thoughts

  1. Cloud Control looks like a step in the right direction.
  2. Having Properties as a string is a major ergonomic issue.
  3. JSON patch for update is a huge ergonomic issue.
  4. Search (filtering in List) functionality is missing.
  5. “Desired state” naming is unjustified.

Cloud Control is a big effort, so let’s give the team some slack and see how the API improves over time. Hopefully soon 🙂


Have a nice day, evening, or night! Mornings are just statistically less nice…

Failed Stealing from Python

I made a mistake. I hope you will learn something from it.

Mental Shortcuts

Heuristic, tl;dr for the purposes of this article: a mental shortcut. The brain chooses to do less thinking to save energy. It relies on simple rules to get a result. The result is correct… sometimes.

I took a mental shortcut when working on my own programming language, Next Generation Shell. It was a mistake.

Additionally, I ignored the uneasy but very vague feeling that what I was doing was not 100% correct. From experience I knew I shouldn’t ignore it, but I did anyway. Another mistake.

I “thought”

Below are heuristics that led to the wrong decision.

Copying features from popular languages is pretty “safe”. After all, “everybody” is using the language and it’s OK. Social proof. Wrong. Everybody makes mistakes. Popular languages have design issues too.

It’s OK to copy a feature because it’s a very basic aspect of a language. Nope. Python messed up argument passing, and I copied that mess.

The Fix

Python 3.8 has the fix. I have mixed feelings about it. I’m still not sure how I should fix this in NGS.

Takeaway

Beware of mental shortcuts. There are situations where these are not acceptable. The tricky part is to detect that you are using a mental shortcut in a situation where it’s not appropriate. I hope that with awareness and practice we can do it.

Also note that your $job is most likely paying you to not take mental shortcuts.

The Web vs Unix

I would like to share my perspective on what’s wrong with Unix. This time by comparing and contrasting it to the Web.

User Agent

With some stretch, one could say that the equivalent of the browser (user agent) is a shell with a terminal. You interact with it and it does things for you, like a browser does.

User Interface

The web browser is capable of rendering textual and graphical content. The Unix shell relies on the terminal (usually an emulator mimicking decades-old hardware) for its user interface and in most cases is limited to fixed-width text.

The main difference is how you can interact with the content. In the shell, you can’t. The shell is out of the game. The content that you see on your screen goes from a program straight into the terminal, bypassing the shell. Compared to a browser, that sounds insane: you are simply unable to interact with the content that you see. Ironically enough, this happens in what’s called an “interactive shell”. Some terminals match the text against a predefined set of patterns and allow some minimal interaction, such as the ability to click on a link to open it in a browser.

The browser is a strict superset when it comes to interaction capabilities: you can type, and you can interact with the objects on the screen by clicking them. What an amazing concept! Maybe some day shells will be able to do that too! Meanwhile, in the shell, you type your commands and get a dump of text back, with rare exceptions.

I would summarize that the shell is a shitty user agent. 💩

I can already hear the “the shell is not supposed to do that” argument coming. My opinion: the shell is supposed to do whatever is needed for me to be productive. If it’s your “Unix Philosophy” versus my productivity, then you can keep using Notepad (or ed, the standard text editor, for that matter) and I will be using an IDE, OK?

Layout Engine

They also call it a browser engine; that’s because on the web it lives in the browser. But where is it on Unix? Everywhere. Yes, “make each program do one thing well” went out the window long ago. Each program does (hopefully) one thing, and then it also does the layout of its output.

Each program has the following main options for handling its output:

  1. Primitive output – the program dumps some text on standard output. Let’s include colored text here. It’s just some additional color codes. This is equivalent to not having a layout engine. Sample program in this category: grep.
  2. Interactive UI – the program uses ncurses or a similar library. This is a relatively small number of programs.
  3. Layout engine – the program contains some form of a layout engine. This is pretty common. Sample programs in this category: ls, ps, top, diff (columns output), wc, …

Common issues with these “layout engines” that cause unpleasant, broken views in Unix include:

  1. Improper handling of data that is wider than some hard-coded fixed width
  2. Improper handling of Unicode
  3. Failure to accommodate “unexpected” terminal escape codes in the input (which, after processing, find their way into the output of utilities like sed)

TCP/IP

Pipe

Let’s talk about pipes. Before everybody gets offended and says that pipes are the sacred cow, the best feature of Unix: yes, they probably are.

Pipes roughly correspond to the TCP/IP protocols. Pipes deliver data. For now, let’s set aside the fact that they are unidirectional, as opposed to TCP, which is bidirectional.

Since the web is a stack of protocols, the obvious question is how the other parts of the stack correspond. Read on.

HTTP

HTTP would correspond to text. Well, mostly text. Sometimes null-character-separated records. Sometimes something else. That’s the standard “format” for communication between Unix applications.

“Write programs to handle text streams, because that is a universal interface.” – Basics of the Unix Philosophy.

The original claim is that text is best for interoperability: a large number of utilities take text as input, manipulate it in many different ways, and output text. Sounds like a dream. Except in reality this dream turned out to be a nightmare.

Incompatible, ad-hoc formats

Text on Unix is not a single format. It’s a bunch of ad-hoc formats, typically incompatible between different programs. That’s why we have a variety of tools such as sed, cut, awk and the like. Here is my hot take: these tools are not solutions, they are workarounds. When you don’t have a protocol for communication between applications, you need a bunch of adapters. Like Sisyphus, one needs to write these adapters. All the time. Forever. Text parsing and manipulation feels like a core part of Unix. From my perspective, it’s accidental complexity.

On a philosophical note: the “universal interface” should have been a stream of bytes, or maybe even bits. I guess that was not found to be very useful. Apparently, if you add a line-separator character, it becomes good enough to be a recommendation. But why stop there? Maybe add more structure? Maybe accommodate the fact that most of the data is either records with named fields or tables with named columns? Are you sure you counted the columns right for your awk '{print $8}'?

This is in contrast to HTTP, which is spoken by everyone on the Web.

Some Hope

Newer CLIs usually do have an option to output JSON or (less commonly) YAML. They are forming a new ecosystem with a different set of tools. From my perspective, this proves the point that the “universal interface” might not be that universal, nor as productive as envisioned (dare I say “unacceptable”?).

Should it be half-way structured, a.k.a. semi-structured, JSON? Is that the sweet spot? I mean, why stop here? Maybe we need something with a schema? Let me know what you think.

One of the notable projects, jc, is an adapter between the “universal interface” and something that you would actually like to work with.
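For example, here is a small Python sketch that shells out to the jc CLI to turn classic text output into structured data; the specific command and parser flag are just one example, and the field names follow jc’s ls parser:

    import json
    import subprocess

    # Classic "universal interface": a blob of text.
    ls_text = subprocess.run(["ls", "-l", "/etc"], capture_output=True, text=True, check=True).stdout

    # jc acts as the adapter: text in, JSON out.
    parsed = subprocess.run(["jc", "--ls"], input=ls_text, capture_output=True, text=True, check=True).stdout

    for entry in json.loads(parsed):
        print(entry.get("filename"), entry.get("size"))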

Shameless Plug

If only we could have a shell that could play a role in this new ecosystem… or maybe even push it a bit in the direction of having semantically meaningful objects on the screen so that interaction would be possible…

Yes, I am aware of other projects solving the same issue. While we mostly agree on the problem, I haven’t yet seen a project which sees the solution the same way as I do.


Happy DevOps-ing!

Running Elegant bash on Simple Kubernetes (Rant)

I was triggered by seeing “elegant” and “bash” in the same sentence. Here are the titles I suggest the original author consider for their next blog posts:

  1. Guide to Expressive Assembler
  2. Removing Types from Scala
  3. Adding Exceptions to Go
  4. Introduction to Concise Java
  5. Writing Synchronous JavaScript with Threads
  6. Using Forth Without the Stack
  7. Adding Curly Braces to Python
  8. Making Guido Like Functional Programming
  9. Using Uniform AWS APIs
  10. Writing Safe big C Programs
  11. Making C Higher Level Language than Portable Assembler
  12. Making your Database Stateless
  13. Making Eventual Consistency Immediate
  14. How to Know that Backups are Working Without Doing Test Recovery
  15. Finding Quality Code on Random Internet Sites
  16. All Programming Languages are Beautiful (Illustrated)
  17. Writing Bug-Free Code that does not Need Reviews
  18. Learning Modern C++ in 3 Easy Steps in 2 Days
  19. Stopping Hype Around Kubernetes – Practical Guide
  20. Preventing Appearance of new JavaScript Frameworks
  21. Why node_modules is not a Dumpster
  22. Removing Most of the Syntax from Perl
  23. Understanding Monads in 10 Minutes
  24. How to Stop Debates and Fighting around OSS Licensing within 1 Month
  25. Replacing bash in Next 20 Years

P.S. I’m still using bash in some places. Sadly, it’s still the most appropriate solution in some cases, as if we were living in the 90s.

Ops Tools Marketing Bullshit Dictionary

This article is aimed mostly at juniors. Lacking experience, you are a soft target for marketing bullshit. I encourage critical thinking when evaluating products and services that are forced down your throat marketed to you.


Background

The purpose of dishonest marketing is to mislead you into using a product and/or a service that you don’t really need. This can waste your time and money.

Separate the products from the marketing. The products themselves are not inherently good or bad. The main question is whether to use product X or Y (and how), or to code the solution yourself, given your specific situation: team, skill levels, requirements, etc. This post will guide you in seeing through marketing bullshit when evaluating these products.

The list below is not comprehensive and I might add items later.

Products Marketing Bullshit

( in no particular order )

Cool / coolness

If something is “cool” and you use it, you might write a blog post about it. It has some PR value to you or your company. Other than that, “coolness” of a product has nothing to do with its usefulness for your situation/use case.

Our product is so much better than X

Note that often X will be the worst possible alternative. For example, configuration management tools will most likely be compared to manual work rather than to other configuration management tools or any kind of automation. Ignore this and compare the suggested product to other viable alternatives.

Use our product or become irrelevant / Our product is [becoming] industry standard

The message is sometimes direct, but often it is hidden between the lines. This is an attempt, often a successful one, to exploit the fear of missing out. Learn the underlying principles, then decide whether you want to use one of the products, which come and go. Learning the underlying principles will take more time up front, but you will become more professional. If you are a junior Ops person, learn Linux and how to use it without any configuration management tools first; learn to use the cloud without CloudFormation or Terraform first; see the problems these tools are trying to solve before using the tools.

Our customers include big companies such as X, Y and Z

You have to understand what these companies are using the product/service for. It could be a pilot, for example. Check for common owners or investors. Remember that big companies make mistakes too. Anyhow, this is irrelevant to your use case and your situation until proven otherwise.

Success story

The formula is simple: deep shit + our product = great success. Look closely: often deep shit + some other sensible solution would also be a great success. For example, configuration management tools show a manual process and then how it was automated. Chances are that automating with scripts would be another viable alternative.

We abstract all the hard work away from you

You must understand what exactly is abstracted away, and how. After learning that, the perceived value of the product might drop. Skip the learning step and start using the tool, and you are in danger: you might need to learn whatever was abstracted in a hurry when facing a bug in the tool, or be at the mercy of someone else to fix it. Remember: abstractions come at a cost.

Don’t reinvent the wheel, we have figured out everything already

This exploits your fear of looking stupid. Would you use a spaceship to travel from Moscow to Tokyo? I guess it would be cheaper to use a plane. Even if tool X solves your problem, it might cost you time and complexity, and it might be easier to code the solution yourself or use a simpler tool. There are probably more legitimate reasons not to use tool X.

Companies marketing bullshit

Industry leading company

Who isn’t? Define your industry narrowly enough and you are the leading company! Simply ignore this one.


Did I miss something? Let me know here in the comments or on Reddit. Have a nice day!

bash or Python? The Square Pegs and a Round Hole Situation

The question “should I do it in bash or in Python?” is both frustrating and common. Why should one even need to choose between two alternatives that are both inadequate for the task at hand? Why try to pick one of the square pegs for the round hole? I believe you should not be in this annoying situation when you are just trying to write a script and get back to your endless stream of other todos.

(Image: the best illustration that I managed to find in 2 minutes)

Both are Inadequate for Ops

bash does not meet any modern expectations for syntax or error handling, nor does it have the ability to work with structured data (beyond arrays and associative arrays, which cannot be nested). Let it go. You don’t usually code in assembly, FORTRAN, C, or C++, do you? They just don’t match typical Ops tasks. Don’t make your life harder than it should be. Let it go. (This is not a blanket statement; use your own judgement about when to make an exception.)

Python, along with many other languages, is a general-purpose programming language that was not intended to solve specifically Ops problems. The consequence is longer and less readable scripts when dealing with files or running external programs, both of which are pretty common in Ops. For example, try checking every status code of every program you run and see how your code looks (a sketch follows below). Sure, you can import a third-party library for that. Is that as convenient as having automatic checking by default, plus a list of known programs that don’t return zero, plus convenient syntax for specifying/overriding the expected exit code? I guess not.
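Here is the sketch, using just the standard library; the grep command and file name are only an illustration:

    import subprocess

    result = subprocess.run(["grep", "-c", "ERROR", "app.log"],
                            capture_output=True, text=True)

    # grep is one of those "known programs which don't return zero":
    # 0 = matches found, 1 = no matches, >1 = a real error.
    # You have to remember and encode that yourself, for every such program.
    if result.returncode > 1:
        raise RuntimeError(f"grep failed: {result.stderr.strip()}")

    match_count = int(result.stdout.strip() or 0)
    print(match_count)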

Your disorientation and frustration are completely legitimate.

Alternatives

A multitude of attempts by different people to provide viable alternatives are in progress. Like the other authors, I would like to help my Ops colleagues avoid frustration and be productive. It just feels good.


The informative section is over. What follows is a shameless plug for the alternative that I am developing and what makes it special.

Next Generation Shell as an Alternative

How does Next Generation Shell differ from the alternatives? A reasonable question that I would ask too before investing any more time, if I were the reader.

UI

Current shells, as well as the proposed alternatives, treat the UI as if nothing has happened since the 70s: mostly typing commands and getting some text back.

How about real interaction with the objects on the screen? Oops. In a typical shell there are no objects on the screen; it’s just a dumpster of combined text (if you are lucky; it could be binary) from the stdout and stderr of one or more processes (unless steps were taken). WTF? It doesn’t help you win. It helps you lose time. NGS does what is intended to make you productive. I have organized my thoughts on how the UI should look and behave on the wiki page.

Programming language

It looks like the alternative solutions take the “let’s make the shell better” approach and are therefore heavily based on shell syntax and paradigms.

There is also the “let’s make a library for an existing language” approach, which doesn’t hit the target either. For example, you can’t have dedicated syntax for common Ops tasks.

And finally, there is “let’s make a library and a syntax on top of an existing language”. I’m not sure about this one. It sounds good in theory. I looked at something like this some time ago and the overall impression was … awkward (for lack of a better word).

The approach in NGS: let’s make a good programming language for Ops, one that fits the use cases and has syntax and facilities for the most common tasks, such as running external programs.

The language follows the principle that the most common tasks should have their own syntax or a library function (depending on usage frequency). Examples:

  1. ``external program`` – runs an external program and parses its output (JSON is auto-detected; easily extensible for anything else).
  2. status(), log(), debug(), retry() – standard library functions. How many times should an Ops person have to write their own retry()? It’s insane. (See the sketch after this list.)
  3. Argv() facility for constructing command-line parameters (for calling external programs).
  4. p=$(my_prog my_args &); ....; p.wait().
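For reference, here is the kind of retry() boilerplate that keeps getting rewritten in plain Python (a rough sketch, not NGS’s retry()):

    import time

    def retry(fn, attempts=5, delay=2.0):
        """Call fn() until it succeeds or the attempts run out."""
        last_exc = None
        for i in range(attempts):
            try:
                return fn()
            except Exception as exc:         # deliberately broad for the sketch
                last_exc = exc
                time.sleep(delay * (i + 1))  # simple linear backoff
        raise last_exc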

The language has a small number of “big” core concepts (types with inheritance, multiple dispatch, and exceptions).

Dogfooding

Most of the standard library is in NGS.

The UI (which I only recently started working on) is written in NGS. It doesn’t make sense that a user of the shell who wants to fix a bug in the UI suddenly needs to learn Go, Rust, C, or whatever other language.

We use NGS at work.

Most of the demo scripts come from either current or previous work.

How to proceed?

  1. Install NGS.
  2. Consult the documentation and look at the sample scripts.
  3. Write scripts for non-production-critical tasks.
  4. I am here to help. Do not hesitate to contact me with questions, suggestions, or feedback. If anything Ops-y you are trying to do seems easier in bash or Python – open an issue, because from my perspective that’s a bug. Something is inconvenient? Yep, also a bug.

If you are like me, you will find at least some satisfaction in using the most appropriate tool before continuing on to the myriad of other tasks in your todo queue.

Or Just Learn More

  1. Is NGS for you? Take a look at intended use cases.
  2. Take a look at how NGS compares to other programming languages.
  3. Browse sample scripts to get some impression about the language.

Reddit: https://www.reddit.com/r/devops/comments/jm3cwe/bash_or_python_the_square_pegs_and_a_round_hole/

“But it works”

TL;DR – in most cases this is not nearly good enough, and it’s only a small fraction of what you are paid for.

I want this post to be the canonical place to refer people who say “but it works” to, because those of us who explain why this is not OK are tired of repeating the same arguments, me included.

You are paid for …

The following is not an exhaustive list, but it should give you some perspective opposite to the narrow-minded “but it works”.

Typically, your Software Engineering $Job pays you for:

  1. Of course the thing must work. But also…
  2. It should continue working
    1. Gives deprecation warnings? Probably not good.
    2. Only runs on Node.js v10 LTS which is end of life in less than a year (as of writing)? Think again.
    3. Got away with invalid XML? Can you be sure that the next version of the parser won’t be stricter?
  3. It should be maintainable (aka you and other people should find it easy to operate and modify, now and years later)
    1. Code quality
    2. Tests (if you don’t have tests, even your basic claim that something “works” is under suspicion)
    3. Documentation
      1. How to use your sh*t?
      2. How to set up the development environment?
      3. Decisions
      4. Non-obvious code parts
  4. It should be production ready, not abstract “works” or even worse “works on my machine”
    1. Logs
    2. Metrics
    3. Tested in dev/qa/whatever-you-call it environment
    4. Reproducible – tomorrow they make a new environment, “qa42”, in a different AWS account in a different region. Could somebody else deploy your sh*t there without talking to you?
    5. Update 2020-09-13 (from Guy Egozy) – Scalable enough to be used in production.

If you claim that you are “done” because “it works”, congratulations, you have a (probably) working prototype. That’s typically a small part of the project.


Related term – “Tactical Tornado”; look it up.


Update 2020-09-13: Reddit discussion

Python 3.8 Makes me Sad Again

Looking at some “exciting” features landing in Python 3.8, I’m still disappointed and frustrated by the language… as I am by quite a few other languages.

As an author of another programming language, I can’t stop thinking about how things “should have been done” from my perspective. I want to be explicit here: my perspective is biased towards correctness and “WTF are you doing?”. Therefore, take everything here with an appropriate amount of salt.

Yes, I am not talking about any “positive” changes here.

Assignment Expressions

There is a new syntax, :=, that assigns values to variables as part of a larger expression.

A fix that couldn’t be the best possible one because of an earlier design decision.

“Somebody” ignored the wisdom of Lisp, which was “everything is an expression and evaluates to a value” (no statement-vs-expression split), and made assignment a statement in Python years ago. Now this cannot be fixed in a straightforward manner; it must be another syntax. Two different syntaxes for almost the same thing: = for assignment as a statement and := for assignment as an expression.
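A toy illustration of the two syntaxes side by side:

    data = [4, 7, 11]

    # "=" is a statement, so it needs its own line:
    n = len(data)
    if n > 2:
        print(f"{n} elements")

    # ":=" is an expression, so it can live inside the condition (3.8+):
    if (n := len(data)) > 2:
        print(f"{n} elements")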

Positional-only Parameters

There is a new function parameter syntax / to indicate that some function parameters must be specified positionally and cannot be used as keyword arguments:

def f(a, b, /, c, d, *, e, f):
    print(a, b, c, d, e, f)
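In use, with the f defined above (the commented-out calls raise TypeError):

    f(1, 2, 3, 4, e=5, f=6)            # OK: a, b positional-only; c, d either way; e, f keyword-only
    f(1, 2, c=3, d=4, e=5, f=6)        # OK
    # f(a=1, b=2, c=3, d=4, e=5, f=6)  # TypeError: a and b are positional-only
    # f(1, 2, 3, 4, 5, 6)              # TypeError: e and f are keyword-only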

This is an attempt to clean up the mess created by mixing positional and named parameters. Unfortunately, I did not give it enough thought at the time and copied the parameter-handling behaviour from Python, so NGS now has the same problem Python had before 3.8. Hopefully, I will be able to fix it in a more elegant way than Python did.

LRU cache

functools.lru_cache() can now be used as a straight decorator rather than as a function returning a decorator. So both of these are now supported.
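That is, both of the following forms now work (toy functions for illustration):

    from functools import lru_cache

    @lru_cache                  # Python 3.8+: plain decorator form
    def fib(n):
        return n if n < 2 else fib(n - 1) + fib(n - 2)

    @lru_cache(maxsize=128)     # the pre-3.8 form still works
    def fact(n):
        return 1 if n < 2 else n * fact(n - 1)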

OK. Bug fix. But … (functools.py)

    if isinstance(maxsize, int):
        # Negative maxsize is treated as 0
        if maxsize < 0:
            maxsize = 0

If you are setting the LRU cache size to a negative number, it’s 99% a mistake. In NGS, that would be an exception. Silently coercing it is the same approach that causes rm -rf $myfolder/ to remove / when myfolder is unset. Note that this maxsize code is not new, but it’s still there in Python 3.8. I guess that’s another mistake which cannot easily be fixed now, because fixing it would break “working” code.

Collections

The _asdict() method for collections.namedtuple() now returns a dict instead of a collections.OrderedDict. This works because regular dicts have guaranteed ordering since Python 3.7
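A quick illustration (toy namedtuple):

    from collections import namedtuple

    Point = namedtuple("Point", ["x", "y"])
    p = Point(1, 2)

    print(p._asdict())
    # Python 3.8+:            {'x': 1, 'y': 2}   (a regular, insertion-ordered dict)
    # Python 3.7 and earlier: OrderedDict([('x', 1), ('y', 2)])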

OK. Everybody made the mistake of having unordered maps: Perl, Ruby, Python.

  1. Ruby fixed that with the release of version 1.9 in 2008 (according to the post).
  2. Python fixed that with the release of version 3.7 in 2018 (which I take as 10 years of “f*ck you, the developer”).
  3. Perl keeps using unordered maps, according to the documentation.
  4. Same for Raku, again according to the documentation.

NGS has had ordered maps from the start, but that’s not a fair comparison because the NGS project started in 2013, when the mistake was already understood.


How does all of that help you, the reader? I encourage deeper thinking about the choice of programming languages you use. From my perspective, all languages suck, while NGS aims to suck less than the rest for its intended use cases (tl;dr – DevOps scripting).


Update 2020-08-16

Discussions:

  1. https://news.ycombinator.com/item?id=24176823
  2. https://lobste.rs/s/rgcgjz/python_3_8_makes_me_sad_again

Update 2020-08-17

It looks like the article above needs some clarification about my perspective: background, what I am doing and why.

TL;DR

The main points of the article are:

  1. Everything still sucks, including Python. By “sucks” I mean it does not fit well with the tasks I need to do, nor is it aligned with how I think about those tasks.
  2. I am trying to help the situation and the industry by developing my own programming language.

Background about my Thinking

In general, I’m amazed at how bad the overall state of programming is. That includes:

  1. All programming languages that I know, including my own NGS. This is aggravated by the inability to fix anything properly in any language with a substantial amount of code written in it, because you will be breaking existing code. And if you do break things, you get a shitstorm like the ones around Python 3 or Perl 6 (Raku).
  2. The code quality of programs written in all languages. Most of the code that I have seen is bad, sometimes even in official examples.
  3. The quality of available materials, which are sometimes plainly wrong.
  4. Many of existing “Infrastructure as code” solutions, which in most cases follow the same path:
    1. Invent a DSL or use YAML.
    2. “Figure out” later that it’s not powerful enough (by the way, there is an elegant solution – a programming language; I forget the name).
    3. Create a pretty ugly programming language on top of a DSL that was intended for data.

I am creating a new programming language and a shell out of frustration with the current situation, especially with bash and Python. Why these two? Because that’s what I was, and still am, using to get my tasks done.

Are these languages bad? I don’t think that’s a question with a good answer. These languages don’t fit the tasks I’m trying to do, nor are they aligned with how I think, while apparently being among the best choices available.

This Article Background

  1. Saw some post in my RSS feed about new features in Python 3.8.
  2. Took a look.
  3. Yep, everything is still f*cked up.
  4. Wrote a post about it, which was not meant to be a “deep discussion of Python’s flaws”.

I was not planning to invest more time in this, but here I am, trying to clarify.

And your Language is Better? Really?

Let’s clarify “better”. For me, it’s to suck less than the rest for the intended use cases.

author really does consider himself a superior language designer than the Python core-dev team

( From https://www.reddit.com/r/Python/comments/iartgp/python_38_makes_me_sad_again/ )

I consider myself in much easier circumstances:

  1. No substantial amount of code is written in NGS yet.
  2. I’m starting later and therefore have the advantage of looking at more languages, avoiding bad parts, copying (with adaptation) the good parts.
  3. NGS targets a niche; it’s not intended to be a general-purpose language. Choices are clearer and easier when you target a niche.
  4. The language that I’m creating is, almost by definition, more aligned with how I think. I hope that people out there will benefit from using NGS if it is more aligned with how they think too.
  5. See also my Creating a language is easier now (2016) post.

Will I be able to make a “better” language?

From a technical perspective, that’s probable: I am a skilled programmer in several languages, and I have more languages to look at than anybody before me had. My disadvantage is not having much experience in language design. I’m trying to offset that by thinking hard (about the language, the essence of what is being expressed, common patterns, etc.), looking at other languages, and experimenting.

From a marketing perspective, I need to learn a lot. I am aware that “technically better” doesn’t matter as much as I would like it to. Without a community and users, this would be a failed project.

Also, don’t forget luck, which I might or might not have.

What if NGS fails?

I think that the situation today is unbearable. I’m trying to fix it. I feel like I have to, despite the odds. I hope that even if NGS fails to move the industry forward, it will be useful to somebody who attempts that later.

NGS Unique Features – the_one()

I’ve spotted the following common data access patterns.

Get the Only Element in an Array

You need to get the only element of an array, as in:

my_val = my_arr[0]

Additionally, you want to express the assumption that there is exactly one element in the array. In NGS it’s simple:

my_val = my_arr.the_one()

the_one() will return the only element or will throw an exception if there is not exactly one element in the given array.

Get the Only Matching Element in an Array

You have an array. Only one element should satisfy some condition. You want to access that element. Again, there is a straightforward way to express this in NGS:

my_instance = instances.the_one({"InstanceId": my_id})

The use of the_one(...) here again expresses the assumption that there is exactly one instance with the given instance ID in the instances array. the_one(...) will throw an exception if that is not the case.
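For readers who don’t use NGS, here is a rough Python sketch of the same idea (a hypothetical helper, not NGS code):

    def the_one(items, predicate=lambda _: True):
        """Return the single item satisfying predicate; raise if there isn't exactly one."""
        matches = [x for x in items if predicate(x)]
        if len(matches) != 1:
            raise ValueError(f"expected exactly one match, got {len(matches)}")
        return matches[0]

    instances = [{"InstanceId": "i-1"}, {"InstanceId": "i-2"}]
    the_one(instances, lambda i: i["InstanceId"] == "i-2")   # -> {'InstanceId': 'i-2'}
    the_one(instances)                                       # raises ValueError (two elements)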

Update 2020-08-08: As pointed out, the method is not unique to NGS.

Update 2021-02-14: FHIRPath also has somewhat similar function: single().


Happy coding and have a nice week!

Everybody does X, so should we.

Do you even logic?

First, we can get rid of “everybody” here because chances are that if you look carefully… it’s not everybody. Nice rhetoric though.

Second, the argument itself is invalid. The latter does not follow from the former.

(Image: a person expressing disbelief at the logical fallacy. “Do you even logic?”)

Correct approach

Which problem are you solving?

Stop here and think before continuing. The more correctly you define the problem, the better your chances of solving what you actually need to solve.

What’s the best solution to your problem?

When looking into alternative solutions, consider your circumstances: budget, people, knowledge, time frames, integration with existing products in use, etc.

If X is one of the alternative solutions to your problem and “everybody” does X, there may be better documentation, more resources, and more people available who know how to do X. That’s why you might want to consider X favourably.


Related post: Prove your tool is the right choice.

Hope this helps!