MacLochlainn’s Weblog

Michael McLaughlin's Technical Blog


Archive for the ‘Software Engineering’ tag

macOS 2021+ DirectX

without comments

Obsolescence is always a factor with the macOS. After all, it keeps the revenue flow stable in hardware sales. Last October, Apple released macOS Monterey (12). It gets problematic for me when helping my students set up MySQL on their Apple computers because some students come with very old machines. Take, for example, my wonderfully maintained MacBook Pro (mid 2014), which became obsolete with the terminal release of macOS Big Sur (11.6.2). When I finally upgraded to that terminal release, I found my VMware Fusion (11.5.*) failed to load with a “Not enough physical memory is available …” error dialog.

The pretty Apple dialog box is quite misleading unless you place it in the context of a VMware application problem that occurs when you upgrade from one version of the macOS to another. You might go down any of the three generalized rat holes it suggests, like paring down running applications, or the other nonsense on the Apple Discussion Board. While those suggestions are useful when your macOS is truly running short of memory, it appears the same error can simply mean the application isn’t supported on the new macOS release. It strikes me that this error message may be linked to a VMware virtualization issue with Hyper-V memory management in the combination of VMware Fusion (11.5.*) and macOS Big Sur (11.6.*).

For that reason, you can’t believe discussion threads unreservedly, whether they’re from Apple or other vendors, because they’re targeted at a universal context. Users are often looking for a specific fix, which means an answer to a specific use case or problem context. The VMware Fusion 11 Release Notes clearly state that it only supports macOS Mojave (10.14) and macOS Catalina (10.15), which narrows the context, or use case, for the error.

The error message, in this context, is most likely raised because the product is incompatible with how VMware Fusion manages memory at some level in the macOS Big Sur (11.6.*) version. As I speculated earlier, the out-of-memory error may be linked to how VMware uses Hyper-V, but that’s a shot in the dark (a random guess, not the Peter Sellers movie of the same name that reprises his role as Inspector Jacques Clouseau).

How VMware works on the macOS is important to my students because we give them two alternatives for setting up MySQL on the macOS: one inside a Linux VM and the other with Docker (my notes from January this year cover creating a Docker instance on the macOS). My students reported errors like this earlier in the year, and I suggested they upgrade to VMware Fusion 12. It seemed to work for everybody, but now I can report the exact error message and verify the fix with a qualified reason.

While I’m on this topic, it’s probably best to deal with DirectX support on Apple hardware. The Apple hardware requirements for supporting DirectX 11 3D acceleration in a virtual machine are currently:

  • Mac Pro 2013 and later
  • iMac 27-inch 2014 and later
  • MacBook Pro 13-inch 2015 and later
  • MacBook Pro 15-inch 2015 with dual graphics and later
  • MacBook Air 2015 and later
  • MacBook 2015 and later
  • iMac 21-inch 2015 and later
  • iMac Pro 2017 and later
  • Mac mini 2018 and later

It looks like everybody with older Apple machines must plan to upgrade because we can probably assume most 2015 hardware will become obsolete with the new macOS in October 2022. If that’s not in your budget, you should plan for the cost now.

Fortunately, I also have a MacBook Pro (mid 2021), the last of the Intel Core i9 chip models. I bought it the week before the announcement of the new tech because a little birdie told me it would be M1-only after the announcement. The newer MacBook Pro is awesome, and the 16″ screen is better than the older 15″ screen. I just hate the lack of a magnetic power cord. Alas, that’s the price of ensuring I had an Intel chipset.

As always, I hope this helps those looking for an answer.

Written by maclochlainn

December 24th, 2021 at 4:53 pm

Posted in Apple, Linux, Mac, macOS


Why Stored Programs?

with 2 comments

Why should you use stored programs? Great question; here’s my little insight into a situation that I heard about in a large organization.

A very large organization is having a technology argument. In some ways, like politics, half-truths drive this type of discussion. This company has hundreds of databases, split roughly evenly between SQL Server and Oracle. The argument (half-truth) states that using T-SQL or PL/SQL yields “spaghetti” code!

It seems like an old argument from my perspective. After all, I’ve been working with T-SQL and PL/SQL for a long time. Spaghetti code exists in every language when unskilled programmers solve problems, but the point here is one of software architecture and an attempt to malign stored programming in general. Let’s examine the merit of the argument against stored programs.

First of all, the argument against stored programs is simply not true. SQL DML statements, like INSERT, UPDATE, and DELETE, maintain ACID-compliant interactions with a single table in a database. Unfortunately, the same statements create anomalies (errors) in a poorly designed database.
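
To make that concrete, here’s a minimal sketch in MySQL syntax (the customer_order table and its data are hypothetical): in a denormalized table that repeats customer data on every row, a perfectly ACID-compliant single-row UPDATE still leaves the data inconsistent, which is the classic update anomaly.

  -- Hypothetical denormalized table: customer data repeats on every order row.
  CREATE TABLE customer_order
  ( order_id        INT PRIMARY KEY
  , customer_name   VARCHAR(40)
  , customer_email  VARCHAR(60)
  , order_total     DECIMAL(10,2));

  INSERT INTO customer_order VALUES
    (1001,'Pat Doe','pat@example.com',25.00)
  , (1002,'Pat Doe','pat@example.com',40.00);

  -- An ACID-compliant single-row UPDATE that nonetheless corrupts the data:
  -- the email now differs between rows 1001 and 1002 (an update anomaly).
  UPDATE customer_order
  SET    customer_email = 'pdoe@example.com'
  WHERE  order_id = 1001;

The UPDATE succeeds and commits cleanly; the corruption comes from the design, not from the statement.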

Stored programs provide the ability to perform ACID-compliant interactions across a series of tables in a database. They may also hide database design errors and protect the data from corruption. The same can’t be said for Java or C# development: Java and C# developers frequently fail to see database design errors, or they overlook them as inconsequential. This type of behavior results in corrupt data.
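
As a sketch of that idea in MySQL syntax (the transfer_funds procedure and the account and account_journal tables are hypothetical), a stored procedure can wrap DML against several tables in one transaction and roll everything back on any error, so callers never see a partial write:

  DELIMITER //

  -- Hypothetical procedure: move funds between two account rows atomically.
  CREATE PROCEDURE transfer_funds
  ( IN p_source_id  INT
  , IN p_target_id  INT
  , IN p_amount     DECIMAL(10,2))
  BEGIN
    -- Any SQL error rolls back all work and re-raises the error to the caller.
    DECLARE EXIT HANDLER FOR SQLEXCEPTION
    BEGIN
      ROLLBACK;
      RESIGNAL;
    END;

    START TRANSACTION;

    UPDATE account
    SET    balance = balance - p_amount
    WHERE  account_id = p_source_id;

    UPDATE account
    SET    balance = balance + p_amount
    WHERE  account_id = p_target_id;

    -- Record the movement in a second table inside the same transaction.
    INSERT INTO account_journal
    ( source_id, target_id, amount )
    VALUES
    ( p_source_id, p_target_id, p_amount );

    COMMIT;
  END
  //

  DELIMITER ;

Either both tables change or neither does, and that guarantee lives in the database rather than in every application that touches it.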

Migrating key logic outside the database typically raises costs, error rates, and overall application complexity. If you’re asking why, that’s great. Here are my thoughts on why:

  1. Making a Java or C# programmer responsible for managing the transaction scope across multiple tables in a database is not trivial. It requires a Java programmer who has truly mastered SQL. As a rule, it means a programmer writes many more lines of logic in their code because they don’t understand how to use SQL. It often prevents joins from being performed in the database, where they would considerably outperform external language operations.
  2. Identifying bottlenecks and poor usage of data becomes much more complex for DBAs because small queries that avoid joins don’t appear problematic inside the database. DBAs don’t see the execution or scope of transactions running outside of the database, so you’re generally left with anecdotal customer complaints about the inefficiency of the application. Therefore, you have diminished accountability.
  3. Developing a library of stored procedures (and functions) ensures the integrity of transaction management. It also provides a series of published interfaces to developers writing the application logic (see the sketch after this list). The published interface provides a modular contract and lets developers focus on delivering quality applications without worrying about the database design. It lowers costs and increases quality by focusing developers on their strengths rather than trying to make them generalists. That having been said, it should never mask a poorly designed database!
  4. Service level agreements are critical metrics in any organization because they compel efficiency. If you mix the logic of the database and the application layer together, you can’t hold the development team responsible for the interface or batch processing metrics because they’ll always “blame” the database. Likewise, you can’t hold the database team responsible for performance when their metrics will only show trivial DML statement processing. Moreover, the DBA team will always show you that it’s not their fault because they’ve got metrics!
  5. Removing transaction controls from the database server generally means you increase the analysis and design costs. That’s because few developers have a deep understanding of both a non-database programming language and the database. Likewise, input from DBAs is marginalized because the solution that makes sense is disallowed by design fiat. Systems designed in this disparate way often evolve into extremely awkward application models.
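
To illustrate the published interface from point 3, here’s a continuation of the hypothetical transfer_funds sketch above: the application code only needs the procedure’s name, and the application’s account only needs the EXECUTE privilege, so the transaction scope and the table design stay inside the database.

  -- Application code calls the published interface, never the base tables.
  CALL transfer_funds(20001, 20002, 150.00);

  -- Granting EXECUTE on the procedure, rather than DML privileges on the
  -- tables, enforces the interface (the store schema name is hypothetical).
  GRANT EXECUTE ON PROCEDURE store.transfer_funds TO 'app_user'@'%';

Swapping out the procedure’s internals, say to fix a design error in the underlying tables, never breaks the callers as long as the signature holds.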

Interestingly, the effective use of T-SQL or PL/SQL often identifies, isolates, and manages issues in poorly designed database models. That’s because stored programs focus on the integrity of transactions across tables and leverage native database features. They also act like CSS files: just as CSS avoids inline styles, stored programs avoid embedding SQL and transaction control statements in the application code.

Let’s face this fact: any person who writes something like the “spaghetti” code claim in the original context is poorly informed. They’re typically trying to sidestep blame for an existing bad application design or to drive a platform change without cost justification.

My take on this argument is twofold. Technologists in the organization may want to dump what they have and play with something else; or, business and IT management may want to sidestep the wrath of angry users by blaming their failure on the technology instead of on how they didn’t design, manage, or deliver it.

Oh, wait … isn’t that last paragraph the reason for the existence of pre-packaged software? 😉 Don’t hesitate to chime in; after all, it’s just my off-the-cuff opinion.

Written by maclochlainn

October 6th, 2012 at 3:48 pm