• 1 Post
  • 23 Comments
Joined 11 months ago
Cake day: January 3rd, 2024

  • Qt is a cross-platform UI development framework whose goal is to look native to the platform it runs on. This video by a Linux maintainer from 2014 explains its benefits over GTK; it's a fun video and I don't think the issues have really changed.

    Most GTK advocates will argue Qt is developed by Trolltech and isn't GPL licensed so could go closed source! This argument ignores that open source projects use the open source releases of Qt, and if Trolltech did close the source then the last open source release would be maintained (much like GTK is).

    Personally I would avoid Flutter on the grounds it's a Google-owned library and Google have the attention span of a toddler.

    Not helping that assessment is that Google let go of the Fuchsia team (which Flutter was being developed for) and seems to have let go a lot of Flutter developers.

    Personally I hate web frontends as local applications. They integrate poorly on the desktop and often the JS engine has weird memory leaks.




  • SpaceX are on track to launch 130 times this year; their industry competitors launch 6-12 times per year.

    I suspect the higher incident rate is related, since you will need to manufacture, roll out, etc… much more often.

    The article also talks about most of the incidents being in booster recovery. Only 2 competitors do that:

    Blue Origin's sub-orbital booster always returned to the launch site and at best managed monthly launches. That rocket hasn't launched in more than a year.

    Rocket Lab perform ocean recovery but only launched 11 times last year and only recovered the booster twice.

    So it's hard to really compare.


  • When AMD launched Ryzen they deliberately offered way more I/O bandwidth than Intel.

    The first generation Ryzen CPUs were sensitive to RAM frequency, so low frequency RAM could cause performance issues. That got fixed in the 3000 series.

    There are a small number of Ryzen CPUs whose names end in "3D"; it means they have 3D V-Cache, which is supposed to add ridiculous performance in certain situations. Phoronix runs tons of benchmarks on CPUs and GPUs.

    The only Intel instruction sets AMD haven't implemented are AVX-512 and AVX-10. No one uses AVX-512 because Intel CPUs get so hot they throttle, so much so that it's faster not to use the extension. AVX-10 is something new Intel released this year to get around that.

    AMD does support AVX2, which a lot of audio/video products do use.


  • I wouldn’t get massively excited.

    Python is a scripting language; it shines when you want to write a standalone file which takes an input and performs a task. Scripting languages are great to learn as a first language, so Python is wonderful for non-developers.

    The issue you hit is that the build management solutions for Python are kind of broken, and those tools are what support and encourage good development practice, so a lot of Python projects end up as a collection of scripts rather than a mature project. You can have good projects but…

    In raw benchmarks Java has 90% of the performance of C/C++, but in reality Java is often more performant, because developers get bogged down in memory management in C/C++ and so have more time to optimise in Java. I'm not sure where Rust will come out, to be honest.

    Python benchmarks at 50% of the performance of Java; in reality I've found code ends up slightly worse, because Python is procedural and library support and streaming are poorly supported.

    Take library support: Spring really rose to prominence because of Hibernate, which was a way to abstract talking to different databases through objects; you could switch from PostgreSQL to Oracle through config. Spring Data has dumbed this down further, so I define a plain old Java object and Spring will generate everything I need.

    Python expects you to hand craft SQL statements, and every database extends SQL slightly differently, so I need to write SQL for every operation and manage/own it. So the win of being able to quickly read/write to a database (since you don't have to learn anything about Spring) is quickly ruined by all the boilerplate and error handling you now have to write.
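
    To illustrate the Spring Data side of that comparison, here is a minimal sketch (Spring Boot 3 / Spring Data JPA assumed; the Customer entity and its fields are made up for illustration):

    ```java
    import jakarta.persistence.Entity;
    import jakarta.persistence.GeneratedValue;
    import jakarta.persistence.Id;
    import org.springframework.data.jpa.repository.JpaRepository;
    import java.util.List;

    // A plain old Java object marked as an entity.
    @Entity
    public class Customer {
        @Id
        @GeneratedValue
        private Long id;
        private String name;

        protected Customer() {}                        // no-arg constructor required by JPA
        public Customer(String name) { this.name = name; }

        public Long getId() { return id; }
        public String getName() { return name; }
    }

    // Spring generates the implementation at runtime: CRUD, paging and the
    // findByName query are all derived from the interface, with no hand-written SQL.
    interface CustomerRepository extends JpaRepository<Customer, Long> {
        List<Customer> findByName(String name);
    }
    ```

    Switching from PostgreSQL to Oracle is then a matter of changing the JDBC driver and dialect in config, with no change to the code above.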




  • See, it's the opposite in Linux land.

    AMD open sourced their drivers so everything just works, while Nvidia drivers have to be built against your system and Nvidia refused to supply proper desktop (Wayland) support for years (EGLStreams vs GBM).

    The downside of AMD's approach is that support has to trickle down, which depending on what distribution you use can take weeks to a year, and it normally takes a couple of iterations to get everything working nicely. So basically expect the 6800 XT to work brilliantly but the 7300 to be flaky for a bit.

    My favourite bit is that I owned a few Athlon 5300 APUs, and 5 years after they were released AMD were still adding performance improvements to them.


  • Immutable distributions won’t solve the problem.

    You have 3 types of testing: unit (a discrete part of the code), integration (how a piece of software works with others) and system testing (e.g. the software running in its environment). Modern software development has build chains to simplify testing at all 3 levels.
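
    As a rough illustration of the first two levels (JUnit 5 assumed; PriceCalculator and TaxService are made-up names):

    ```java
    import static org.junit.jupiter.api.Assertions.assertEquals;
    import org.junit.jupiter.api.Test;

    // The unit under test: one discrete piece of code.
    class PriceCalculator {
        double withTax(double net, double rate) { return net * (1 + rate); }
    }

    class PriceCalculatorTest {
        // Unit test: exercises PriceCalculator in isolation.
        @Test
        void addsTaxToNetPrice() {
            assertEquals(120.0, new PriceCalculator().withTax(100.0, 0.2), 1e-9);
        }
    }
    // An integration test would wire PriceCalculator to a real TaxService (e.g. one
    // reading rates from a database); a system test would drive the whole deployed
    // application from the outside.
    ```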

    Debian's change freeze effectively puts a known state of software through system testing. The downside is that it's effectively 'free play' testing of the software, so it requires a big pool of users and a lot of time to be effective. This means software in Debian can be on releases up to 3 years old.

    Something like Fedora relies on the test packs built into the open source software; the issue here is that testing in the open source world is really variable in quality. So something like Fedora can pull down broken code that compiles and passes its tests.

    The immutable concept is about testing a core set of utilities so you can run containers of software on top. You haven't stopped the code in the containers being released with bugs or breaking changes; you've just given yourself a means to back out of it. It's a band-aid over the actual problem.

    The solution is to look at core parts of the software stack and improve the test infrastructure. Phoronix manages to run the latest kernels on various types of hardware for benchmarking, so why hasn't the Linux Foundation set up a computing hall to compile and run system-level testing for staged changes?

    Similarly, websites are largely developed with all 3 levels of testing, using things like Jest/Mocha/etc… for unit/integration testing and Robot/Cypress/Selenium/Storybook/etc… for system testing. GTK and KDE apps all have unit/integration tests, but where are the system-level test frameworks?

    All this is kinda boring while 'containers!' is exciting new technology.




  • Firstly, it was just a bit of fun, but from memory…

    Twitter was listed as having 2 data centers and a couple dozen satellite offices.

    I forgot the data center estimate, but most of those satellite offices were tiny. Google gave me the floor area for a couple and they were for 20-60 people (assuming a desk consumes 6 m² and dividing the office area by that).

    Assuming an IT department of 20 for such an office is ridiculous, but I was trying to overestimate.


  • The Silicon Valley companies massively over hired.

    Using Twitter as an example: they used to publicly disclose every site and their entire tech stack.

    I have to write proposals and estimates, and when Elon decided to axe half the company of 8,000 I was curious…

    I assigned the biggest functional team I could (e.g. just create units of 10 and plan for 2 teams to compete on everything). I assumed a full 20-person IT department at every site, etc… Then I added 20% to my total and then 20% again for management.

    I came up with an organisation of ~1,200; Twitter was at 8,000.

    I had excluded content moderators and ad sellers because I had no experience estimating those, but it gives an idea of the scale of the problem.

    I think the idea was to deny competitors people, but in reality that kind of staff bloat will hurt the big companies.


  • It does, but in the '90s/'00s a computer typically meant Windows.

    The ops staff would all be 'Microsoft Certified Engineers', the project managers had heard the Microsoft FUD about open source, and every graduate would have been taught programming via Visual Studio.

    Then you have regulatory hurdles. For example, in 2010 I was working on an 'embedded' product built on a first generation Intel Atom platform. Due to power constraints I suggested we use Linux, and it worked brilliantly.

    Government regulations required anti-virus from an approved list and an OS that had been accredited by a specific body.

    The only accredited OSes were Windows, and the approved anti-virus products only supported Windows. Which is how I got to spend 3 months learning how to cut XP Embedded down to nothing.



  • There will always be someone who is beating you in some metric (buying houses, having kids, promotions, pay, relationships, etc…); fixating on it will drive you mad.

    Instead you should compare your current status against where you were and appreciate how you are moving forward.

    As for age…

    During university my best mate was 27: he had dropped out of his final year, grabbed a random job, then gone to college to get a BTEC so he could start the degree.

    It was similar in my graduate intake; we had a 26-year-old who had been a brickie for 5 years before getting a comp sci degree.

    The first person I line managed was a junior 15 years older than me, who had come from a completely different career stream. They had the house, the kids, had managed big teams, etc… Honestly, I learnt tons from them.





  • I actually researched my list: most of the technologies were used internally for years and either publicly released after better public alternatives had been adopted, or it seems the buzz reached me years after Google's first release. So I am wrong.

    Between 2012 and 2015 I used to consult on Apache Ivy projects (ideally moving them to Maven and purging the insanity people had written). As a result I would get called in when projects had dependency issues.

    The biggest culprits were Guava/Gson: projects would often choose to use them (because Google) and then discover a bug that had been fixed in a later patch release (e.g. they used 2.2.1 and 2.2.2 had the fix). However, the reason they used 2.2.1 was that a library they needed did. Bumping up the version usually caused things to break.

    The standard solution was to ask why they needed Guava/Gson, and every time it turned out to be some function that also exists in one of the Apache Commons libraries. So I would pull down the Commons library and rewrite the bit (often they worked identically).
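
    As a hypothetical example of the kind of like-for-like swap I mean (the specific functions here are chosen for illustration, not any particular client's case):

    ```java
    import org.apache.commons.lang3.StringUtils;

    public class NullSafeCheck {
        // Before (pulls in all of Guava):
        //   com.google.common.base.Strings.isNullOrEmpty(value)
        // After (Apache Commons Lang only):
        static boolean isMissing(String value) {
            return StringUtils.isEmpty(value);   // true for null or ""
        }

        public static void main(String[] args) {
            System.out.println(isMissing(null));  // true
            System.out.println(isMissing("abc")); // false
        }
    }
    ```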

    Fun side note: in 2016-2017 I got called in to consult on a lot of Gradle projects to fix the same kind of convoluted bespoke things people had done with Apache Ivy. The Ivy community already knew in 2012 that those 'features' were a massive headache and told you to use Maven for exactly those reasons. C'est la vie.

    We tried using Protobuf in 2008 and it was worse than Apache Axis for JSON conversion (which feels too harsh to say). Similarly, I had been using AMQP and Kafka for years and tried gRPC when it was released (Google say 2016 but I am sure we tried it in 2014), and it was worse on every metric; I still don't understand why it exists.

    I was using Vaadin in 2011 and honestly thought GWT was released in 2012. I had to use GWT in 2014 and its workflow, compile time and look are just worse than Vaadin's.