The automation of Software Engineering (SE) tasks using Artificial Intelligence (AI) is growing, with AI increasingly leveraged for project management, modeling, testing, and development. Notably, ChatGPT, an AI-powered chatbot, has been introduced as a versatile tool for code writing and test plan generation. Despite the excitement around AI's potential to elevate productivity and even replace human roles in software development, solid empirical evidence remains scarce. Typically, a software engineer's solution is evaluated against a variety of non-functional requirements, such as performance, efficiency, reusability, and usability. This study presents an empirical exploration of the performance of software engineers versus AI on specific development tasks, using an array of quality parameters. Our aim is to enhance the interplay between humans and machines, increase the trustworthiness of AI methodologies, and identify the best performer for each task. In doing so, the study also contributes to refining cooperative, human-in-the-loop workflows in software engineering. We investigate two distinct scenarios: a comparison of ChatGPT-generated code with developer-written code on LeetCode, and a comparison of automated machine learning (Auto-ML) with manual methods for creating a control structure for an Internet of Things (IoT) application. Our findings reveal that while software engineers excel in some scenarios, AI performs better in others. This empirical study helps forge a pathway for collaborative human-machine intelligence in which AI's capabilities augment human skills in software engineering.