I’ve been using text files for decades
Thing is, when working on a Commodore 16, there really wasn’t a “text file” concept. My programs were written in ASCII and that was about it. Everything else was a “binary” file to me. I didn’t know that even my data files were simply CSV, or what file types existed, much less what they were for. In hindsight, all the files I created were text files.
Then came DOS. My programs were still text files, but the spectrum grew to include files created by other applications. I didn’t pay much attention to the contents, since I produced and read each file with the same application. Trying to load a file into a different application was a guessing game, with the exception of program sources, which were interchangeable among editors and IDEs. I assumed everything else had a proprietary binary format and stopped there.
There was a brief (and somewhat painful) period during which I produced files using Windows. The interface took care of what could be dropped where, but some files came out “garbled” at read time, which is how I learned about character sets. I was losing control over what could be done with my files, handing it to applications, and thus to application publishers and owners. That was also when I learned that applications (and companies) get deprecated. Some of my peers collected an ever-increasing amount of old applications (and even old hardware readers, since those evolved as well) so content wouldn’t be lost forever. Museums take care of that nowadays, sometimes with a lot of effort.
Then I learned about Linux. I was aware of UNIX, which I had daydreamed about having access to, and Linux was available (free) for x86. I bought The UNIX Operating System book and ingested it over a weekend, all the while downloading Linux over a 28.8K modem, filling 90-plus diskettes. I was eager to know how to use a UNIX-like OS before I was even able to install it.
Everything is a file on UNIX
Not only was UNIX designed to handle (mostly) text files; even hardware was exposed so that inputs and outputs behave like files. Pipes were the way to connect everything, and UNIX shipped bullet-proof binaries to handle whatever was required in between.
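To make the pipe idea concrete, here is a minimal sketch of the classic filter pattern in Python (the script name and the pipeline it sits in are made up for illustration): anything that reads text on standard input and writes text on standard output can be dropped between two pipes.

```python
#!/usr/bin/env python3
# upper.py -- a minimal Unix-style filter (hypothetical example).
# It reads lines from stdin and writes transformed lines to stdout,
# so it can sit anywhere in a pipeline, e.g.:
#   cat notes.txt | python3 upper.py | sort
import sys

for line in sys.stdin:
    sys.stdout.write(line.upper())
```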
Getting my DOS/WIN files into Linux was a breeze: just a copy, plus transforming EoL and (sometimes) EoF. It was easy and fast. All my C programs ran just as well. The “knowledge” I had in text files was readily available through less, more, and vi. I never lost anything, except for those pesky binary files I had produced with applications from the Windows period. That was a mistake I wasn’t inclined to make again.
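That EoL/EoF transformation fits in a few lines. A sketch, assuming hypothetical file names: DOS text files end lines with CRLF and sometimes carry a trailing Ctrl-Z end-of-file marker, neither of which Unix tools expect.

```python
# Convert a DOS text file to Unix conventions (file names are made up).
with open("notes_dos.txt", "rb") as src:
    data = src.read()

data = data.replace(b"\r\n", b"\n")  # EoL: CRLF becomes LF
data = data.rstrip(b"\x1a")          # EoF: drop the trailing Ctrl-Z, if any

with open("notes_unix.txt", "wb") as dst:
    dst.write(data)
```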
The design has held up exceedingly well over the years.
I downloaded the Slackware Linux distribution in 1995. I’ve seen my fair share of data portability issues with proprietary systems along the way. Writing middleware is lucrative, after all.
Structured text
Yes. Everything continues to be a stream of text. Encoding has evolved to include more and more bytes per character, but it is still text.
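To make the “more bytes per character” point concrete, here is a quick look at UTF-8, where plain ASCII still costs a single byte and other characters take two, three, or four:

```python
# UTF-8 spends extra bytes only on the characters that need them.
for ch in ("a", "é", "中", "🚀"):
    encoded = ch.encode("utf-8")
    print(ch, len(encoded), encoded)
# a 1 b'a'
# é 2 b'\xc3\xa9'
# 中 3 b'\xe4\xb8\xad'
# 🚀 4 b'\xf0\x9f\x9a\x80'
```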
Data dictionaries have evolved beyond CSV column names, but they still point to the same data collections. Even complex or changing data can mostly be expressed as delimited text, whether with parentheses, quotes, or whitespace. I expect those conventions to keep evolving over time. And everything keeps being easily ported with a couple of lines of code, or by simply using available binaries; see the sketch below.
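As an example of those “couple of lines”, the header row of a CSV still works as a tiny data dictionary: each column name becomes a key. The file name and columns here are hypothetical.

```python
# The CSV header acts as a data dictionary: each row becomes a dict
# keyed by the column names. File name and columns are made up.
import csv

with open("inventory.csv", newline="") as f:
    for row in csv.DictReader(f):
        print(row["name"], row["qty"])
```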
Even this blog is nothing but text in a standard format, Markdown.
AI and MD
Context is easily added as Markdown files for model consumption. Having experience with text files, and being versed in how computers ingest information, is a plus when getting up to speed on any new project; creating and updating workflows as simple text files has never been easier than it is today.
Getting a SOTA model to analyze flows of data and produce text files that describe a workflow and its details is one prompt. Getting a model to execute said workflow on new data is another prompt. Setting a goal on a workflow makes those prompts unnecessary, and so on…