
From XSS and SQLi to AI-generated code and supply-chain compromise: How application security is evolving

By SabrinaKayaci

Keeping up with vulnerabilities is like playing a never-ending game of whack-a-mole. One day, we were knee-deep in XSS payloads and buffer overflows; the next, developers everywhere were plugging SQL injection holes with duct tape and regex.
A lot of our earlier tech wasn’t built with security in mind – or at least not at the forefront of our minds. But over the past two decades, the culture has changed, and developers are shifting left. Programming languages, as well as the frameworks and ecosystems around them, are evolving and adapting to counter security threats.

XSS: From everyday headache to “mostly handled”

Remember when cross-site scripting (XSS) was every web developer’s nightmare? In the 2000s, it seemed like every other website was vulnerable. If you were lucky, your users’ only punishment was an annoying pop-up. If not, credentials and session cookies were up for grabs.

Modern languages and frameworks took proactive steps in their designs. Today, many of them have built-in protections against common vulnerabilities like XSS. Some examples are:

  • React, Angular, Vue: Automatically escape output by default. You have to go out of your way to render raw HTML (and they make you feel guilty about it).
  • Django, Ruby on Rails, ASP.NET Core: Templates escape user input by default.
  • Browsers: Even they pitch in, with features like Content Security Policy (CSP).
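
For a concrete feel of escaping-by-default, here’s a minimal sketch using Jinja2, the template engine behind Flask (Django’s templates behave much the same way); the payload string is purely illustrative:

# Minimal sketch: modern template engines escape untrusted values by default.
from jinja2 import Environment, select_autoescape

env = Environment(autoescape=select_autoescape(default_for_string=True))
payload = '<script>alert("xss")</script>'

# Escaped by default: the payload is rendered as inert text.
print(env.from_string("Hello, {{ name }}!").render(name=payload))
# Hello, &lt;script&gt;alert(&#34;xss&#34;)&lt;/script&gt;!

# Rendering raw HTML requires an explicit opt-out (the |safe filter).
print(env.from_string("Hello, {{ name | safe }}!").render(name=payload))
# Hello, <script>alert("xss")</script>!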

It’s still possible to write a vulnerable application, but the bar is higher, and attackers have to work harder to clear it. That’s thanks to the evolution of how languages and frameworks approach security – not as a feature, but as the default.
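
Browsers add a further backstop on top of this. A Content-Security-Policy response header can stop inline scripts from executing even when an escape is missed; here’s a minimal sketch in Flask, with an illustrative (not prescriptive) policy value:

# Minimal sketch: attach a Content Security Policy to every response in Flask.
from flask import Flask

app = Flask(__name__)

@app.after_request
def add_csp_header(response):
    # Restrict scripts to this origin; inline scripts are blocked under this policy.
    response.headers["Content-Security-Policy"] = "default-src 'self'; script-src 'self'"
    return response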

SQL injection: OR 1=1 is mostly history

SQL injection once powered major data breaches. Now, parameterized queries have become the norm, and we’re on our way to leaving those days behind. 

  • Object-Relational Mappers (ORMs) like SQLAlchemy, Entity Framework, and Hibernate generate safe SQL.
  • Most modern languages make string concatenation in queries unnecessary (and uncool).
  • Even PHP, once infamous for its “raw SQL everywhere” approach, now encourages prepared statements and offers safe database APIs.
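
For a concrete before-and-after, here’s a minimal sketch using Python’s built-in sqlite3 module; the table and the classic ' OR 1=1 payload are purely illustrative:

# Minimal sketch: string concatenation versus a parameterized query.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

user_input = "' OR 1=1 --"

# Vulnerable: the input becomes part of the SQL and rewrites the query.
rows = conn.execute(
    "SELECT * FROM users WHERE name = '" + user_input + "'"
).fetchall()
print(len(rows))  # 1 -- OR 1=1 matched every row in the table

# Safe: a parameterized query treats the input as data, never as SQL.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()
print(len(rows))  # 0 -- no user is literally named ' OR 1=1 --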

Not a perfect world, but a much improved one. Keep in mind, though, that technology alone shouldn’t be relied upon to keep applications secure; developers still need to do their due diligence.

Memory mischief: The rise of Rust (and memory-managed languages)

C and C++ are legendary for performance. And legendary for buffer overflows, use-after-free, and all manner of memory mischief. Enter memory-safe languages:

  • Java, C#, Python: Garbage collection and managed memory eliminate entire classes of bugs.
  • Rust: Takes it up a notch with ownership semantics, preventing data races and dangling pointers at compile time.

A lot of system-level work has migrated to these “safer” languages, and the impact is apparent in everything from embedded devices to operating systems (hello, Rust in the Linux kernel).

The new frontiers: AI and supply chain attacks

Just as one threat starts to become old news, new threats emerge. Two of these are:

Supply chain compromise

The SolarWinds breach and the npm “event-stream” incident have put supply chain attacks in the spotlight over the last decade. They show that you can write perfect code and still get breached because a dependency many levels deep was compromised.

Some of the changes we’re seeing as a result:

  • Package registries are adopting MFA and signing requirements.
  • A Software Bill of Materials (SBOM) is becoming a must-have, especially in regulated industries.

But the battle is ongoing. If anything, our code is more interconnected than ever, and the rise of “vibe coding” is complicating matters further.

AI-generated code

AI tools like GitHub Copilot and ChatGPT are generating millions of code snippets. This is a double-edged sword: you get increased productivity, but the generated code is often vulnerable, partly because it reflects the insecure patterns in the code the models were trained on.

I asked ChatGPT 4.5 to identify the vulnerability in the following code and it couldn't. Can you? Leave a reply with your thoughts!

# Setup assumed for this snippet: a Flask app plus a rate-limit decorator
# (the signature matches the ratelimit package); db is the app's own
# data-access layer and isn't shown here.
from flask import Flask, jsonify, request, session
from ratelimit import limits

app = Flask(__name__)
app.secret_key = "replace-me"  # Flask sessions need a secret key


@limits(calls=6, period=60)
@app.route("/change-password", methods=["POST"])
def change_password():
    user = session.get("user")
    if not user:
        return jsonify({"message": "Unauthorised"}), 401
    data = request.get_json()
    password = data.get("new_password")
    if not password:
        return jsonify({"message": "Password is required"}), 400
    if not db.change_password(user, password):
        return jsonify({"message": "Failed to change password"}), 500
    return jsonify({"message": "Password changed successfully"}), 200


@limits(calls=6, period=60)
@app.route("/login", methods=["GET", "POST"])
def login():
    data = request.get_json()
    user = data.get("user")
    password = data.get("password")
    if not user or not password:
        return jsonify({"message": "Username and password are required"}), 400
    if not db.authenticate(user, password):
        return jsonify({"message": "Invalid username or password"}), 401
    session["user"] = user
    return jsonify({"message": "Logged in successfully"}), 200

The changes I expect to see going forward are:

  • Guidelines and governance for AI-generated code: Programming languages or coding standards may soon explicitly include guidelines, validation rules, or security frameworks tailored to AI-assisted coding, ensuring generated code adheres to secure patterns. Work is already being done to create rules files for improved security.
  • Integrated security checks at the IDE level: IDEs may embed deeper vulnerability scanning and real-time feedback directly within coding processes. Development environments or compilers may also come integrated with validation tools specifically attuned to potential weaknesses inherent in AI-generated code.
  • Increased reliance on static/dynamic security analysis tools: Enhanced automated scanning integrated into CI/CD pipelines, detecting flaws pre-deployment.
  • Keeping people in the loop: Security awareness should remain a top priority, so teams can catch what automated tooling misses and respond quickly when something slips through.
  • Proving and improving skills: Developers’ training should increasingly emphasize secure coding, particularly when assisted by AI tools.
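
On the static-analysis point, here’s a minimal sketch of what a CI gate might look like, assuming the open-source Bandit scanner; the src/ path is illustrative, and any SAST tool would slot in the same way:

# Minimal sketch: run a static analyser in CI and fail the build on findings.
# Assumes Bandit is installed (pip install bandit); swap in your scanner of choice.
import subprocess
import sys

result = subprocess.run(
    ["bandit", "-r", "src/"],  # recursively scan the (illustrative) src/ directory
    capture_output=True,
    text=True,
)
print(result.stdout)

# Bandit exits non-zero when it reports issues, so treat that as a failed gate.
if result.returncode != 0:
    sys.exit("Security findings detected: failing the build.")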

At the current stage, thorough PR reviews are more crucial than ever. As with any code, always review and test AI-generated code, especially for security-sensitive logic. AI is a tool, not an auditor.

Security: Not a feature, but a default

Application security has come a long way from the Wild West days of the early internet. The biggest shift in the software development landscape is that security is no longer a bolt-on. 

Modern programming languages and ecosystems try to make the secure path the easy path. Defaults are safe (escaped output, parameterized queries), dangerous operations are noisy (compiler warnings, explicit function names), and security updates are automated (thanks to package managers and CI/CD integrations).

Of course, attackers are creative, and the landscape is always shifting. But the evolution of programming languages, as well as the surrounding tools and communities, means developers have a fighting chance. 

While we face newer threats like AI-generated code vulnerabilities and supply chain compromise, the foundations are getting stronger. The key is to keep learning, stay skeptical, and use the tools at your disposal properly.

The biggest shift I would like to see is the human element no longer being viewed as the “weakest link”. The moles never stop popping up, but now, at least, we have better mallets – and the resources to help us use them.
