This is where I put all the things I've created which I hope may somehow be useful to others.
I believe in modular, purpose-built software. The more freely a program or library can be adapted and joined to other software, the better. This of course raises the question of interoperability, documentation, and cooperation amongst developers, and is why I am a staunch proponent of free software. I cannot stress how important it is, not just for myself (in that regard I find it to be an unparalleled educational tool), but for humanity as a whole. It is the actualization of creating software without the constraints of intellectual ownership and micromanagement.
I have a strong interest in distributed software systems, especially self-healing networks, distributed data structures, and the techniques that go with them. Beyond that I prefer to create purpose-built, practical applications in a variety of languages. Practical in that their sole use isn't to pad a resume, but to be of some use to others now and in the future, even beyond direct use, whether for design ideas or coding practices. Being able to rattle off a list of intangibles for the sole purpose of having a list reeks of convenience, if nothing else. Above all, I hope what I have created can be of some use to people, even if only pedagogically.
In retrospect, I have come to understand that I am a very design-driven developer, perhaps even over-designing at times. The vast majority of my effort is paid out up front, designing multiple approaches and ideas. Generally this pays off, since perfect information about the task at hand rarely exists (so much for SCRUM), and I'd rather fall back on a secondary design than have to re-design on the fly. Over-designing also helps the implementation, in that you know what to expect out of certain parts of the program. Despite modern languages offering an over-abundance of higher-order constructs, I honestly don't believe in their liberal use, e.g. PHP's magic methods. Most of the highly abstracted capabilities of modern languages are more reliably implemented by means of lower-level controls. Illustratively, one could implement an alternative, more tailored form of reflection in any language that doesn't support it by means of status accessor methods. Judgment calls will inescapably need to be made about any design decision, and in some cases these constructs can be indispensable, but a rush to be highly reflective/dynamic isn't always the best approach, as readability can suffer.
So, I just discovered that my 5400 rpm hard drive, a WD80EFAX, is really a 7200 rpm drive, and more than that, all 3.5" drives above 6TB are in fact 7200, across all three manufacturers! Due to changing market factors, the only large drives manufacturers can afford to make are the faster, hotter, and louder 7200 variants. It seems the only reason we were able to purchase the quieter, cooler, longer-lasting ones in the past was economies of scale, that and the fact that everyone and their mother was buying them. Now that margins have collapsed with SSDs around, they can only afford to make one type.
I wouldn't have been so disappointed if it were only this, oh no; rather, the blatant lying about this change in SMART (the diagnostics deliberately report incorrect rpm data), and the complete omission and obfuscation in advertisements, borders on racketeering. WD, of course, has led the way, most prominently by labelling drives "5400 rpm class" while still packing a 7200 in the box. It took some intrepid and commendable researchers (Ars Technica) acoustically microphoning said drives to prove they did indeed spin at the unadvertised speed of 7200 rpm.
I myself actually realized something was amiss four years ago, after I noticed a newly purchased drive was hot to the touch compared to others plugged in right next to it, in the same apparatus. After reviewing its SMART data recently, I discovered it was a whole 12 degrees Celsius hotter than a comparable, true 5400 rpm drive. Lovely. 49 vs 37 °C.
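If you want to interrogate your own drives, something along these lines does the trick (a rough sketch, assuming smartmontools is installed; /dev/sdX is whatever your drive actually is):
# what the drive claims its rotation rate is -- take it with a grain of salt, per the above
$ sudo smartctl -i /dev/sdX | grep -i 'rotation rate'
# and the current temperature, straight from the SMART attributes
$ sudo smartctl -A /dev/sdX | grep -i temperature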
More evidence of the downward spiral tech is plowing through. Be wary going forward; there's plenty of margin for future prevarications and omissions about anything and everything you intend to lay down money for.
regreSSHion
CVE-2024-6387
Here's the skinny on this dog-days-of-summer fun RCE.
OpenSSH Version History
• [START - 4.4p1) VULNERABLE
• [4.4p1 - 8.5p1) FIXED patched in CVE-2006-5051 and CVE-2008-4109
• [8.5p1 - 9.8p1) VULNERABLE (again)
However,
This is basically a nothingburger.
The bug was re-introduced circa 2021, in OpenSSH version 8.5p1. There is a PoC against 32-bit systems running this daemon.
Meanwhile, 64-bit systems are unaffected due to address space layout randomization, yadda...
--> In the year of our lord 2024, very few systems, if any, are on 32-bit x86 with OpenSSH. <--
I just checked my fleet as well. The only likely 32-bit candidates are ARM/MIPS routers, and guess what they run: Dropbear.
I mean, check yours, but mine were OpenWRT and DietPi, both using Dropbear.
Armbian does use OpenSSH, though the variants I have are all 64-bit; not saying the sample is pure, merely anecdotal.
Realistically,
Raspbian is the only common SoC distro which, if deployed as 32-bit, could be problematic.
Again, you'd have to have deployed it in the last year or two AND not have the 64-bit version.
It's thankfully not a situation I found myself in, but from what I read, the 32-bit build is still a widely used variant, if not THE variant.
The above is the most common candidate for a vulnerable device.
In x64 land, typical systems just don't fall into the 32-bit category at this stage in the game; those systems are outliers.
However, definitely update your herd, even if you are completely 64-bit. That said, there is a case against knee-jerking: there is a huge non-vulnerable version gap (4.4 through 8.5) which is quite fine to leave in service (the Terrapin Attack, CVE-2023-48795, notwithstanding), and while the last few years of releases have been vulnerable, if you haven't deployed many Raspberry Pi-like systems in that window you're most likely not at risk.
# query your daemon for its banner/version
$ nc -v -v <host> <sshport>
# what architecture you're running
$ uname -m
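And if you just want the banner line itself to compare against the ranges above, a rough sketch (host and port are yours to fill in):
# pull the first line the daemon sends; the OpenSSH_X.YpZ token tells you which bucket you're in
$ nc -w 3 <host> <sshport> </dev/null | head -n 1
# e.g. SSH-2.0-OpenSSH_9.6p1 -- anything in [8.5p1, 9.8p1) means patch time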
Alas, for some time now I have been looking for a sign. A harbinger of things to come. Something at least tangentially related to the sad phenomenon I have witnessed recently: the growing indignation surrounding C-suite management and engineering. I totally understand it. There are things that get under people's skin. The first being that we all want returns on our investments, or rather our talents. It is this last bit which is causing inter-class grief in the twenty-first century. For years tech has led, engineering and developers no less, with management a short arm's length behind, dare I say even in the most conservative of companies. Things were new, and management was obviously in the dark, admitting it nonetheless. Ossification has run its course though, especially as of late, and the overconfidence of a new generation has, I think, led them into a new era of blind hubris. Although they are aiming for survival, the reality is that unnecessary complexity and homogenization have surged in recent years, by leaps and bounds, as a result. Now more than ever, wisdom is necessary, as is cooperation between technical experts and big-picture thinkers. Without it, more events like yesterday's will occur.
The main issues with yesterday's occurrence are twofold. The first is that Microsoft and CrowdStrike are both technically to blame. Microsoft, for having a facility in its OS, be it kernel-mode driver inserts or whatever the case, which allowed a terminal condition during OS bootstrapping. This is a tough call though, since if you let 'helpers' swing the hammer during the construction of your house, you can't rightly be surprised if SOMETHING comes out crooked, or worse still, if it collapses. This is the unwritten-agreement problem between MS and its kernel developers. It's been this way since forever, and I understand it affects any two-party situation where tech and policy come together. What I'm very disappointed with is how a company that can find the time to endlessly refresh the Windows desktop and peer into our telemetry like never before with AI and other time-consuming projects can NOT, at the very least, ship some sort of automatic driver rollback when it detects that a new driver has repeatedly crashed the system on boot. Again, there just must not have been enough time in the day for that project. I would've gladly taken it up.
CrowdStrike, on the other side, is without a doubt the main shoulderer of any blame here. They. Obviously. Didn't. Test. It. There's not much more to say. On the ethicality of their systems, which are arguably spyware, one can't really hold them to task, as they are simply filling a market 'need'. Whether the need is ethical, which it is not, is irrelevant. There is, in fact, an entire ecosystem surrounding systems such as these, which I've had to personally 'deal' with while working for several companies in the past. They are, without a doubt, almost always a scourge and don't serve the employee in any way, shape, or form. However, they are a 'requirement' from our last group, management, who through their ignorance and hubris have allowed the scope of these systems to creep ever further into every facet of modern machinery.
Most strikingly, this last part, which stands apart from the technical failings of the first two culprits, is actually a failure of those in power to see the shortcomings of that which they dictate, most specifically because they inherently lack the ability to understand the failure modes. Windows should not be running on forklifts, full stop. I don't care what your ideas are. The opening-up of huge classes of unrelated medical machinery, heavy equipment, airport kiosks, and the like to the desktop operating systems of today, especially WINDOWS, is unacceptable, even if it be Windows Embedded. They should not be on a system requiring irrecoverable update cycles of this frequency. Bewilderment was the takeaway from a lot of engineering-minded people as well when they learned of this crucial facet. I recall seeing WinXP on the drilling machines in the Deepwater Horizon film. Reality is very much this, sadly, and more. It seems that without competent engineers who have the authority to object, familiar systems are deemed 'acceptable' for controlling things many orders of magnitude simpler. The negatives, like an expanded attack surface and tie-ins to corporate spyware agent platforms, aren't even considered. There needs to be a third option: an extensible, purpose-built, reskinnable, and dare I say minimalist OS for machine control, one which ISN'T directly internet connected and beholden to constant keystone updates which could, and do, bring down the entire fleet. Remember, diversity is our strength, right? But this of course requires more muscle and authority from engineering, and certainly more competent boots on the ground. The last of which is not going to happen in our current ecosystem and corporate mindset, where one-size-fits-all, control-and-monitor-everything is the mantra. These software platforms are chosen by people who have little to zero actual understanding of what hazards they impart on systems and are only fed which compliance checkboxes they can now tick once they sign up their organization. Yet another security theatre of a different kind.
For my future predictions, expect more of the same, in addition to the slow circling of the drain that technology has become even when it IS working. Where nuance would flourish, expect commoditization, homogenization, and aggregation to dominate like the gorillas they are. Responsibility, oh that's in there too, or at least as a reminder of what we don't get: any responsibility to strive for more appropriate solutions. None at all.
Keywords: Apache2, Upload Limit, Request Limit, 1 GB, Lua, idealism.
I have been using Apache since forever, so I tend to take it the wrong way when it gets mucked with.
So, on a recent upgrade bender, while moving some decade-old systems from the 3.x kernel to something recent, and leaving ubunster to come back to Debian all over again, I noticed something odd.
There was definitely something blocking me from POSTing files of more than 1024 MB, or even multiple smaller ones adding up to over that limit! This was against controlled code, code whose kinks I had worked out time and time again, and I knew exactly what needed to be adjusted in php.ini. No, this was elsewhere; it wasn't even getting to PHP, and in some browsers it manifested as a "413 Request Entity Too Large".
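For what it's worth, reproducing it from the shell is trivial; a sketch along these lines (the endpoint and form field are stand-ins for whatever your upload handler actually is):
# make a file just over the mystery ceiling and POST it; the 413 comes back before PHP ever sees it
$ dd if=/dev/zero of=big.bin bs=1M count=1100
$ curl -s -o /dev/null -w '%{http_code}\n' -F 'upload=@big.bin' https://example.org/upload.php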
The usual gamut of sites and know-it-alls proved pretty much useless. Stuck in the past they were, perpetually hashing out the memory_limit and post_max_size existentialisms. Quickly I started the process of elimination, taking into account working systems with almost-as-new setups. It must, I deduced, be something 'changed' in the last year or two, and definitely Apache-based. A flashback quickly befell me. Ahh, right! Just like when PHP 8 decided (itself) to implicitly set the default timezone to UTC (and not use the system time)! It was a subtle but not TOO difficult change to notice. This one must be similarly situated: a long-standing apache2.conf setting that never needed to be touched has had its default implicitly altered to 1 GiB. But why?
🎡🌴🗿 CVE-2022-29404 🗿🌴🎡
So IF you're using Lua on the backend via Apache mod_lua, and a malicious script... I'll let the official explanation stand:
In Apache HTTP Server 2.4.53 and earlier, a malicious request to a lua script that calls r:parsebody(0) may cause a denial of service due to no default limit on possible input size.
So, to recap. The issue could of course be mitigated by not using mod_lua, as stated elsewhere, or possibly by a manual script check that parses potential user-generated scripts looking for r:parsebody(0), or perhaps by rewriting the implementation of r:parsebody() to not accept unbounded inputs. Instead, ALL installations of the webserver, WHETHER OR NOT THEY ARE USING LUA, now have a decades-old convention changed (WITH NARY A NOTICE IN THE SUPPLIED apache2.conf FILES!) to a value which induces behavior that is VERY time-consuming to track down, given the multitude of competing settings in PHP and other backend solutions which mimic the same situation, while at the same time affording the admin ZERO error-log notices in a vanilla logging setup that the situation is occurring. Oh, the fix could also have just been an alternative "LimitRequestBody" directive in mod_lua's conf which it could have been required to use. The possibilities are endless. What shouldn't have happened, though, is what did happen.
The REAL issue is how this fix is somehow considered an ideal resolution to the initial CVE at hand. Fixes should always address the problem as close to the original cause as possible. Something in a mod_lua config would have been much more appropriate to those of us luddites who don't use the module.
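For completeness, restoring the decades-old behavior (or choosing your own ceiling) is a one-liner once you know the culprit; a minimal sketch for apache2.conf or a vhost, values being yours to pick:
# 0 restores the old unlimited default; anything else is an explicit byte ceiling
LimitRequestBody 0
# or scope a saner limit to just the paths that actually take uploads
<Location "/uploads">
    LimitRequestBody 2147483647
</Location>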
The industry is changing. Right now, before our very eyes, interesting things are taking hold. No one can truly say where things may end up, and not everything will be for the better. At least for the moment, before the storm, it gives us pause to look around and remember the amazing things which have come to pass and have died, are dying, and have helped us get to where we are now.
In loving memory of the free ESXi vSphere Hypervisor
...and for that matter VMWare...
https://news.ycombinator.com/item?id=39359534
I really hate the term AI, especially when used in its contemporary setting. At the very best, what we see are methods of computational modeling, statistical combinatoric synthesis, and particular forms of a stochastic parrot at work, be it visual art or text. Eventually, it will permeate the entirety of the range of known human outputs.
Whether that is at all good or not is entirely up to your frame of mind. Two distinct personality-type pools come to mind, and I posit that the predominantly sensing types will prefer the new x-model creation engines, and likewise the intuitive types will shun them in favor of more "genuine" forms of human invention and inspiration.
What lies ahead however is the headlong intersection of this new tech with society and particularly its conflict with existing laws, most specifically copyright.
One such fight is being hashed out via comments on copyright.gov. It is there I of course submitted my two cents.
https://copyright.gov/policy/artificial-intelligence/
I think the question of copyright and AI has hit a crossroads. Obviously there have been problems with copyright for decades, but with the trivialization of the 'creative' process brought to light recently via generative processes that recombine, ad infinitum, various elements much like random notes in a song, we have been left with an introspection into what it takes to create, and it is much less than satisfying. We want to believe in inspiration, and I truly believe there is such a concept. There is, however, at least some fraction of every work which is taken, in at least some transformative way, from past works and experiences. This is inescapably so and will always be part of the human experience. It is not, however, in itself an argument for a conditional form of copyright, one which would otherwise rest on needing to provide one's sources. Since there will always be some form of adaptive plagiarism, we must always ask whether the new work is sufficiently distinct and not merely a ripoff of some prior work.
The main aspect of what copyright must accomplish is then based on what it at least aims to achieve: an incentive to innovate. It's this incentive which, even though at times it won't be fully achieved and may not even hold honestly, must still persist as the goal, which in turn is rewarded with the protections of copyright, at least for a time.
Looking at how AI, or at the very least generative processes, work and recombine elements, and then listening to the derivative technologies, one can only be left with bewilderment. For a huge range of the underlying processes, few can say with absolute certainty what the aim of the machine was at a macroscopic level. The technology is extremely hard to gauge at the atomic level too, especially at the scales seen presently. So then what are we left with, trying to apply a reward to such a system? How can it value such incentives? At the very least, one can expect someone, or some group, pushing the buttons to reap the rewards of said works. This furthers the strong at the expense of the weak and their government, cranking out marginally unique ideas, a perversion of the system. In the end, sadly, not much can be done to prevent this angle, as groups can simply claim their ideas were self-inspired. It should not, however, be considered fruitless to enforce, in spite of this, that no conspiratorially generative system can "claim" copyright over randomly seeded works. Rights should only be awarded to individuals, or groups made of individuals, on the grounds that doing so can inspire and entice further works.
The second angle of AI with respect to pre-existing copyrighted works regards their use in training. I see no reason such a system cannot and shouldn't be allowed to operate. Firstly because, in and of itself, it doesn't pose any immediate attack on existing works and the copyright system, granted my first angle holds. That angle, again, being that generative or non-human systems cannot be used to yield newly copyrighted works. In essence this aspect is taken care of, granted the first holds. Unfortunately, the future is extremely messy and will be riddled with groups trying to subvert both issues, and in bad faith.
Back on 25 Sept I submitted comments for the FCC's Cybersecurity Labeling for Internet of Things.
https://www.fcc.gov/ecfs/search/docket-detail/23-239
IoT security has become an issue of concern primarily because of scale.
A single device isn't even noisy if compromised, but millions definitely are a problem.
From this, it's in all of our interests to hold companies to task with regard to whether or not their boxes are potentially contributing to destructive botnets at large.
Since I am a professional and hobbyist developer, I always lean on Open Source as not only an immeasurable source of knowledge but also of utility for the end user.
I don't see how a forced open-source approach is necessarily the best solution for proprietary market forces at the get-go. However, when balancing an army of abandoned, unpatched, proprietary products against ones which have been open-sourced at abandonment, it very much seems that if a device has been deemed EOL, it should certainly be open-sourced so that independent developers have a shot at salvage, lest something catastrophic be found.
The only remaining question, then, is how long each period should last before a manufacturer is required to relinquish source if it forgoes any further updates. This gets all too complicated when a product is stagnant for many years and faces no known vulnerability which needs patching.
I think at present, however, I'm at a loss for concrete numbers, as product categories span huge functional and practical realms. Some things need to be regularly replaced, others less so. Minimally, the update period should be yearly, and then after 3 years without updates they must yield source. While this is strict, since making said sources publicly consumable means extra work during the initial product outline, the complementary externality is that allowing your device to be ubiquitously attached to the internet en masse must carry at least some level of responsibility. Just as oil producers must face the fact that their customers pollute, propelling themselves forward and leaving something behind.
I want to do something a little bit different. Never before have I mulled over a situation for so long, incurring so many rewrites and rehashes, as I have over this topic, namely IPv6's rollout and how SLAAC misses the boat on a crucial use case. I'm going to use the opportunity to state it as quickly as possible and fill in the details later. The situation is basically that SLAAC misses how network edge operators favor address ranges which are static, singly-homed, and centrally controlled MORE than they favor said endpoints being publicly reachable. That is it, the most succinct, terse formulation of the problem with IPv6 migration, a migration which is by no means beloved, though it should be. It is something we should all be jumping on, but because SLAAC, the uncle nobody asked to come along for the migration, is here, and the options and RFCs are numerous, stagnation ensues. It is why, ultimately, DHCPv6+NATv6 will eventually come to the rescue, as it is the only way to address these network operators' use cases. One cannot ensure network space will never change (at the edge) for machines which are not on ARIN allocations. This couldn't be more true for last-mile, ISP-allocated ranges, and if the anecdotes are to be believed, end users will routinely see their prefixes (and thus their entire network) change just as often as their IPv4 singletons, because even though nothing forces ISPs to rotate prefixes, they have no financial obligation to keep them static either. It's as if one of the cornerstones of IP networking (which has materialized behind NAT for decades) was thrown away, as if nobody needed it. But hey, there are now way more public addresses.
So here we are at the juncture: something we've grown near and dear to has run its course, and the replacement will exceed our expectations in terms of expandability. Yet part of the way it's being offered (by default) misses the exact way in which we've come to depend on our networks to operate. Oh, and what's this? No NAT needed, you say? Sounds too good to be true. Well, we'd like our addresses to stay put, thank you very much. No, multihoming is not a reasonable solution; yes, we're aware of RFC 6724, and we'll be giddy someday when THAT introduces the next security event! The single greatest way to minimize unanticipated network interactions is a SINGLE, unchanging address. Despite the work put into said RFC, which makes its way into getaddrinfo(), you can't beat an algorithm which can only choose ONE source address. It's pretty hard to fail there. In the meantime, source address selection has no fewer than 8 rules which must be assessed before connection initialization. Again, it's not that these are inherently incapable of delivering deterministic results; rather, it's from the viewpoint of an experienced developer who has seen the myriad ways in which large systems grow over time, caches are introduced, best practices are skirted, even publicly well-known ways of initializing something are ignored. It's hard for me to trust that such moons won't align to cause interesting and totally unforeseen failure modes which wouldn't even be a consideration on a singly-homed system. This isn't even taking into account the ways I've seen IPv4 multihoming act up, albeit without logic like that described in the RFC.
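For the curious, you can at least peek at what the selection machinery on a Linux box decides; a rough sketch (destination and paths are placeholders, and gai.conf may not exist on your system):
# all the global-scope v6 addresses this host has accumulated (SLAAC, temporary, DHCPv6, ...)
$ ip -6 addr show scope global
# ask the kernel which source address it would actually pick for a given destination
$ ip -6 route get 2001:db8::1
# glibc's getaddrinfo() consults an RFC 6724-style policy table, overridable here if present
$ cat /etc/gai.conf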
Now, I can already hear the retorts about what people should expect when they don't read the docs, or that no one is stopping us from using DHCPv6+NATv6 for the singly-homed approach. The main problem I have is that in a sea of vendors and interaction points, some may, and indeed have, chosen to implement IPv6 SLAAC and call it done. THIS is the problem, and IT is the largest impediment to moving forward. The standard must contain both methods, and furthermore, if the need arises to allow only one, I'd opt for the one most functionally similar to what we already have, that being DHCPv6+NATv6.
Pushing SLAAC as the only way forward (sans NAT) for IPv6 is, in effect, shortsighted. It is illustrative of a human tendency wherein anything run by humans that becomes more widely proliferated incurs schisms, as common elements come to be cherished by subgroups for different reasons. Things we had no idea our neighbor loved about our commonalities cause strife when one group decides to change them, and so it goes with technology and protocols. The severe lack of advocacy for edge network admins will only be detrimental in the end, and as it stands, even at the time of this writing, it is precisely those kinds of networks where adoption lags.