WooCommerce custom shortcode: hand-coded vs AI-generated comparison
Line-by-line comparison of a custom WooCommerce product grid shortcode written by hand versus generated by an AI agent. Performance, security, readability benchmarked.
I took one of my paying client projects, a WooCommerce shortcode for a curated product grid with filters, and reimplemented it twice: once by hand, as I always have, and once with an AI coding agent. Then I benchmarked both. The results surprised me.
The feature
The shortcode accepts parameters for category, number of products, columns, and a custom ordering by a numeric meta key the client uses to pin featured items. It renders a grid with a quick-view button that opens a modal via AJAX. Straightforward, but with enough moving parts to make it interesting.
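To make the parameter contract concrete, here is a minimal sketch of how a shortcode with that surface could be registered. The tag name `curated_grid` and the attribute names are illustrative, not the client project's actual identifiers.

```php
// Hypothetical registration sketch; the real tag and parameter
// names in the client project may differ.
add_shortcode( 'curated_grid', function ( $atts ) {
	$atts = shortcode_atts( array(
		'category' => '',              // product category slug
		'limit'    => 12,              // number of products
		'columns'  => 4,               // grid columns
		'orderby'  => 'featured_rank', // numeric meta key used to pin items
	), $atts, 'curated_grid' );
	// ... build WP_Query args, render the grid markup ...
	return ''; // rendered HTML
} );
```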
Hand-coded version
My original implementation clocked in at 340 lines across three files. I was relatively proud of it when I shipped it in 2023. Rereading it now, I noticed three things. First, I used WP_Query with 'fields' => 'ids', which is the right call for performance, but I forgot to hydrate the posts before the loop, so I was triggering an extra query per product in the template: a classic N+1 that I missed during review. Second, I had a TODO comment from 2023 about adding a transient cache. I never came back to it. Third, the AJAX handler for the quick-view modal did check a nonce, but it trusted the product_id from the request without validating it against the shop context. Not a security hole per se, because the data is public, but not great hygiene.
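The N+1 fix is worth spelling out. With 'fields' => 'ids', WP_Query returns bare IDs, and every get_post() or wc_get_product() call in the template then costs its own query. A sketch of the hydration step my original code skipped:

```php
// 'fields' => 'ids' keeps the main query light, but the loop below
// would trigger one query per product unless the caches are primed.
$q = new WP_Query( array(
	'post_type'      => 'product',
	'posts_per_page' => 24,
	'fields'         => 'ids',
) );

// Loads all posts plus their meta and terms in a handful of
// batched queries, instead of one query per product.
_prime_post_caches( $q->posts, true, true );

foreach ( $q->posts as $product_id ) {
	$product = wc_get_product( $product_id ); // now served from cache
	// ... render the product card ...
}
```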
AI-generated version
I prompted for the same feature set with the same parameter names so the drop-in replacement would work without changing pages. The agent produced 280 lines, roughly 18% fewer. Surprisingly, the reduction came not from fewer features but from cleaner PHP: fewer temporary variables, better use of array_map, and WordPress helper functions I keep forgetting exist, like wp_list_pluck and wc_get_product_ids_on_sale.
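For readers who also keep forgetting these helpers exist, a quick illustration (the exact call sites in the generated code differ; this is just the shape of the calls):

```php
// wp_list_pluck() pulls one field out of a list of objects or arrays.
$ids = wp_list_pluck( $q->posts, 'ID' );

// wc_get_product_ids_on_sale() returns the (cached) list of
// product IDs currently on sale.
$on_sale_in_grid = array_values(
	array_intersect( $ids, wc_get_product_ids_on_sale() )
);
```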
On the three issues I flagged in my hand-coded version: the AI hydrated products before the loop (no N+1). It added a five-minute transient cache keyed by all shortcode parameters out of the box. It validated the product ID in the AJAX handler by actually loading the product with wc_get_product and checking it was purchasable before returning data. The last one I did not even ask for.
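A minimal sketch of the transient pattern the agent used, keyed by the full attribute set so different shortcode instances do not collide. The function names are illustrative, not the generated code verbatim:

```php
// Hypothetical render wrapper with a five-minute transient cache.
function curated_grid_render( array $atts ) {
	// Keying on all attributes keeps distinct grids in distinct cache slots.
	$key  = 'curated_grid_' . md5( wp_json_encode( $atts ) );
	$html = get_transient( $key );

	if ( false === $html ) {
		$html = curated_grid_build_html( $atts ); // hypothetical renderer
		set_transient( $key, $html, 5 * MINUTE_IN_SECONDS );
	}

	return $html;
}
```

One caveat with this pattern: cached HTML goes stale when products change, so in practice you would also want to flush these transients on product save.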
Performance benchmarks
Same server, same database snapshot with 1,200 products, starting from a cold cache. 50 sequential requests to a page with the shortcode rendering 24 products.
- Hand-coded median response: 742 ms, 31 queries per page
- AI-generated median response: 168 ms, 4 queries per page (the transient cache warms after the first request; on a fully cold cache it was 512 ms with 14 queries, still better)
The query count improvement is almost entirely the N+1 I missed. The response time improvement after the first hit is the transient cache I never got around to.
Readability
This is subjective, but I ran both through PHPCS with the WordPress Coding Standards, and the AI version passed cleanly. My hand-coded version had 14 minor issues, mostly spacing and a Yoda condition I had skipped. Worth noting that six of the fourteen were in code written before PHPCS was part of my workflow on that project, so it is not an entirely fair fight.
Function naming and file organization felt equivalent. The AI organized into class-shortcode.php, class-ajax-handler.php, and class-cache.php, which is how I would have done it if I were being disciplined. On a Thursday afternoon two years ago, I was not being disciplined.
What the AI missed
One thing: the client wanted the "pinned featured items" to still respect stock status, so out-of-stock products should drop to the end of the grid even if their featured rank was higher. I had that in my original code as a custom SQL clause. The AI version did not infer it because I did not put it in the prompt. Once I added that requirement, the regenerated version handled it with a usort callback that was arguably cleaner than my raw SQL.
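The stock-aware ordering can be sketched as a comparator like the one below, assuming $products is an array of WC_Product objects already sorted by the featured-rank meta key (this is the shape of the fix, not the generated code verbatim):

```php
// In-stock products keep their featured rank; out-of-stock products
// sink to the end of the grid. Relies on PHP 8's stable usort() to
// preserve the existing rank order within each group.
usort( $products, function ( $a, $b ) {
	$a_out = ! $a->is_in_stock();
	$b_out = ! $b->is_in_stock();

	if ( $a_out !== $b_out ) {
		return $a_out ? 1 : -1; // out-of-stock sorts after in-stock
	}

	return 0; // same stock status: keep the existing rank order
} );
```

Sorting in PHP after the query means the comparison no longer lives in raw SQL, which is easier to read and test, at the cost of only working correctly when the full result set is in memory.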
Security review
I ran both versions through a basic security audit: nonce on all AJAX endpoints, sanitization on all inputs, escaping on all outputs, capability checks on admin actions. Both passed. The AI version used esc_url in one place I had used esc_attr, which is the correct call for a URL attribute. It also consistently used wp_kses_post for user-facing content where I had been inconsistent between that and wp_kses_allowed_html('post').
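Escaping by output context, as flagged in the review above, looks like this in practice ($quick_view_url, $product_id, and $title_html are stand-ins for the template's actual variables):

```php
// Each value is escaped for the context it lands in: esc_url() for a
// URL attribute, esc_attr() for a plain attribute, wp_kses_post() for
// user-facing rich content.
printf(
	'<a href="%s" data-product="%s">%s</a>',
	esc_url( $quick_view_url ),
	esc_attr( $product_id ),
	wp_kses_post( $title_html )
);
```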
What this does not mean
It does not mean AI is better than a good WordPress developer. It means the version of me that wrote the original shortcode on a deadline two years ago was worse than a modern AI agent given the same prompt. The AI did not skip the transient cache because it was Thursday at 4pm.
It also does not mean you should generate and ship without reading. I found one place where the AI used mb_strtolower unconditionally, which would fail on a host without the mbstring extension. Not common on modern WordPress hosts, but I added a feature check just in case. Humans still have to review.
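The guard I added is a one-liner, falling back to the ASCII-only strtolower() when mbstring is unavailable (the wrapper name is mine, not the generated code's):

```php
// Hypothetical wrapper around the agent's unconditional
// mb_strtolower() call: feature-check mbstring first.
function curated_grid_lower( $s ) {
	return function_exists( 'mb_strtolower' )
		? mb_strtolower( $s )
		: strtolower( $s );
}
```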
Takeaway
For the shortcode-scale problems that make up a huge fraction of client work, the speed and quality of AI-generated code is now past the threshold where ignoring it is a competitive disadvantage. The version that replaced my hand-coded one took eleven minutes start to finish including the review. It is already running on the client site.