Every industry changes when its tools change
Over time, each industry standardizes, and with it the ways of delivering services change, making them faster and cheaper. Once, all furniture was made by hand to order; today, production lines turn out ready-made furniture sets in a few hours, and with the introduction of modular systems they are practically tailored to our individual needs.
As a rule, such changes take place in specific areas of business in an evolutionary way. However, an unexpected event like the COVID-19 pandemic can make change happen in a revolutionary way, overnight. So it was with our shopping habits: before the lockdown, going to the store to make a purchase was a matter of course, but in times of pandemic, sales had to move to the Internet. A sudden boom in online stores (e-commerce) followed. The pandemic has ended, but our habit of shopping online has remained and become the new standard. Today, no self-respecting brand, be it clothing, home appliances or anything else, tries to conquer the market without online channels.
Changes in business are inevitable and are dictated by the following cause-and-effect sequence:
- with the popularization of a service/product, the number of people willing to use it increases,
- as demand increases, supply cannot keep up,
- when supply does not keep up, the prices of these products and services rise,
- as prices rise, more and more companies enter the industry, which begins to complain about a shortage of hands to work and about heavy competition,
- to address these shortcomings, companies in the industry begin to look for new solutions and tools,
- with the emergence of new tools, new professions appear and old ones fade into oblivion (e.g. the blacksmith),
- at first, the new solutions are met with distrust, then with resistance from the industry itself, which sees them as a threat, until finally companies realize that in order to survive they must either adapt or close their businesses,
- at some point, the industry adopts the solution and it becomes the new standard,
- and then the cycle starts all over again...
History of traditional programming cycles
The software industry has gone through many cycles of evolution and revolution, driven by the growing demand for more efficient, faster and cheaper ways of developing applications and systems. Each stage introduced new tools, changed the way developers worked, and redefined the standards of traditional methods. Here is an overview of the key cycles in the history of programming, from punched cards to modern no-code/low-code (NCLC) platforms.
Punched cards and machine code (1940s-1950s)
At the very beginning, programming was a tedious, manual process. In the 1940s and 1950s, programmers used punched cards to input instructions directly into computers such as ENIAC. The code was written in machine language, i.e. sequences of zeros and ones, specific to a given machine.
Demand and problem: with the development of the first computers in scientific and government institutions, there was a need for more complex calculations. However, programming in machine language was time-consuming, required tremendous precision, and was prone to errors.
Solution: the introduction of assemblers in the 1950s, which translated more readable instructions (mnemonics) into machine code. This accelerated the programming process and reduced the number of errors.
Effect: assemblers became the standard, but they still required expertise. The industry began to feel a shortage of qualified programmers, which started the next cycle.
High-level languages (1950s-1970s)
With the popularization of computers in business and science, there was a need for more accessible programming tools. Machine languages and assemblers were too complicated for a growing number of users.
Demand and problem: companies like IBM wanted to build software faster and cheaper to support growing business needs, such as accounting systems and databases. Programming in assembler was still too slow and required specialized knowledge.
Solution: introduction of high-level languages such as Fortran (1957), COBOL (1959) or C (1972). These languages were more abstract, resembled natural language, and allowed developers to focus on logic instead of hardware specifics.
Effect: high-level languages revolutionized the industry. Programming became more accessible and the number of programmers grew. However, as applications became more complex, new challenges arose, such as managing large projects and their scalability.
Structured and object-oriented programming (1970s-1990s)
As software became more complex, there were problems with maintaining code and managing large teams of developers.
Demand and problem: in the 1970s and 1980s, applications such as operating systems or business software required hundreds of thousands of lines of code. Code written in a chaotic manner (so-called “spaghetti code”) was difficult to debug and modify. The rising cost of software maintenance became a problem.
Solution: introduction of the structured (e.g. Pascal, 1970) and object-oriented (e.g. Smalltalk, C++ in 1983, Java in 1995) programming paradigms. Structured programming promoted code readability and modularity, and object-oriented programming introduced the concepts of classes and objects, making it easier to reuse code and manage complex systems.
Effect: these paradigms became the standard in education and industry. However, the development of graphical user interfaces (GUI) and the internet in the 1990s increased the demand for faster application development, which exposed the limitations of traditional programming.
Frameworks, RAD tools, mobile applications and ready-made components (1990s-2010s)
The internet boom of the 1990s and the growing popularity of web applications (web development) created pressure for even faster software implementations. Companies wanted to bring new products to market at lightning speed in order to gain a competitive advantage.
Demand and problem: building mobile applications or websites from scratch in languages such as Java or C++ was too time-consuming. Programmers had to write a lot of repetitive code, for example to handle user interfaces or communicate with databases.
Solution: frameworks (e.g. Ruby on Rails, 2004; Django, 2005), Rapid Application Development (RAD) tools such as Visual Basic or Delphi, and mobile technologies (Apple's Swift, Kotlin, React Native) appeared. These tools provided ready-made components and libraries that accelerated development and partially enabled drag-and-drop application building.
Effect: frameworks and RAD tools became the standard for building desktop, mobile and web applications. However, using them still required programming skills, which kept software development out of reach for non-technical users.
Low-code and no-code development — the next cycle of application design
What is no-code and low-code in software development?
With the digitalization of business, the COVID-19 pandemic and the growing demand for mobile applications and e-commerce, the emphasis shifted to more accessible methods of quickly creating a functional application. Traditional methods, even with the use of frameworks, no longer kept up with demand.
Demand and problem: small and medium-sized companies, startups and non-technical users (so-called citizen developers) needed tools that would allow them to automate business processes (workflows) and create applications without having to know how to code. At the same time, IT departments in large companies faced backlogs in the implementation of projects.
Solution: no-code/low-code (NCLC) platforms such as Bubble, Webflow and Zapier now allow you to build applications with visual interfaces, ready-made components and minimal code. No-code targets non-technical users, letting them create applications without writing a single line of code, while low-code speeds up the work of developers and lets them build applications and systems of greater complexity.
Effect: NCLC platforms are revolutionizing the market by democratizing software development. It is estimated that by 2025 more than 70% of new business applications will be created using these technologies. The pandemic accelerated their adoption, just as the change in shopping habits accelerated the development of e-commerce.
NCLC Automation and Artificial Intelligence (AI)
What is more, an NCLC platform can easily be combined with LLMs (Large Language Models) such as Claude or ChatGPT, arming our applications with a powerful tool for data analysis and decision-making. What used to require thousands of lines of traditional code is now replaced by a simple integration of two complementary solutions. However, with such great opportunities comes great responsibility.
The advantages of no-code and low-code platforms aside, what are the disadvantages of software development in no-code technology?
No-code/low-code (NCLC) platforms are revolutionizing application delivery and business process automation, but as with any technology cycle, new challenges and concerns arise.
Cybersecurity and No-Code Tools
No-code developers are usually non-technical users, so the applications they create may be exposed to security vulnerabilities such as improper data handling, lack of encryption or flaws in the ready-made no-code tools themselves. The speed of application development can lead to skipping good security practices, and building applications without IT oversight often bypasses security standards, increasing the risk of cyberattacks.
For example, in one of our projects for a customer from the manufacturing industry, we automated the bidding process using the n8n low-code platform. The platform integrated via an API (Application Programming Interface) with an artificial intelligence model, which in turn was equipped with the expertise needed to analyze the input data received from the client and prepare a quote for the work on that basis. The client was very keen to ensure that this knowledge, especially information about service prices and the personal data of the customers, did not leak to anyone outside the narrow circle of the sales department. On the other hand, the client did not want to use a private AI model hosted on their own infrastructure, because the maintenance costs were unacceptable. Therefore, to address the client's requirements, we implemented the following security mechanisms:
- data anonymization - before the automation feeds the data received from the client into the AI model, all personal data is fully anonymized, so that the model knows nothing about it,
- valuation algorithm outside of AI - the AI model also does not know the actual prices. Its task is only to analyze the input data and extract the values required to prepare the quote. The quote itself is calculated outside the model, based on the data extracted by the AI,
- automation pentests - before handing the application over to the client, we additionally carried out security tests (so-called penetration tests) of the automation according to the OWASP Top 10 standard. This was a final verification (a sanity check), assuring both us and the client that the data they feared might leak remains safe.
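The anonymization step above can be sketched as a simple pre-processing function. This is a minimal illustration, not the actual implementation from the project; the regex patterns, placeholder tokens and function name are all assumptions made for the example:

```python
import re

# Illustrative patterns only; a production system would cover more
# categories of personal data (names, addresses, IDs, etc.).
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def anonymize(text: str) -> tuple[str, dict]:
    """Replace personal data with placeholder tokens and return the
    mapping, so the original values can be restored after the LLM call.
    The external model only ever sees the masked text."""
    mapping = {}
    for label, pattern in PATTERNS.items():
        for i, match in enumerate(pattern.findall(text)):
            token = f"[{label}_{i}]"
            mapping[token] = match
            text = text.replace(match, token)
    return text, mapping

masked, mapping = anonymize(
    "Contact Jan at jan.kowalski@example.com or +48 123 456 789"
)
```

Keeping the token-to-value mapping on our side of the integration is what guarantees the model itself never learns the masked values.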
QA testing and management in no-code/low-code applications
In the no-code/low-code industry, WYSIWYG (What You See Is What You Get) graphical editors are used in most cases, especially in no-code solutions. They hide the complexity of the application from its creator. The average no-code enthusiast who wants to automate some process or build an application in no-code technology will therefore not write a single line of code; instead, they will create flowcharts and configure individual actions in WYSIWYG editors. For this reason, especially among non-technical people, it is very difficult to prepare an application in a way that makes it easily testable. As a rule, all configurations in such applications are hard-coded, so it is difficult to maintain QA (Quality Assurance) control or run automated tests that assure us the application works flawlessly at all times.
In one of our projects, the client requested that the application automatically transfer emails sent by customers to the address on their website into a database. As it turned out, whenever the mail server underwent updates or suffered service interruptions, the connection between the no-code forwarding application and the server was broken. Even after the server resumed operation, the connection remained suspended; as a result, the client could lose leads without even knowing about it. To address this issue, we implemented the following improvements:
- a change to the mail server configuration — we configured the mail server so that after receiving a message it notifies the no-code system of the arrival of a new email (the Z-Push mechanism), so the system no longer needs to hold an active connection to the mail server,
- automated tests — every night a test machine sent an email to the mail server address and checked whether the no-code system would forward it. Unfortunately, the no-code approach sometimes turns out to be insufficient, and so it was in this case: the default NCLC components did not allow us, in the middle of the process, to query the mail server, wait a certain time and, if no email arrived, perform some action, such as sending a Slack notification that something was not working. For testing purposes we therefore used a low-code approach, in which the missing fragment, not supported by ready-made components, was supplemented with a piece of regular programming code (so-called full-code).
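The missing "full-code" fragment essentially boils down to a poll-with-timeout loop. The sketch below illustrates the idea under stated assumptions: the function and parameter names are hypothetical, and the actual sending, checking and alerting steps are injected as callables rather than reproduced from the real project:

```python
import time

def wait_for(check, timeout_s: float, interval_s: float = 1.0) -> bool:
    """Poll `check()` until it returns True or `timeout_s` elapses.
    This is the piece the ready-made NCLC components were missing:
    waiting a bounded time for the forwarded email to appear."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if check():
            return True
        time.sleep(interval_s)
    return False

def nightly_email_test(send_test_email, email_arrived, alert):
    """Send a probe email; if the no-code flow does not forward it
    within 10 minutes, raise an alert (e.g. a Slack notification)."""
    send_test_email()
    if not wait_for(email_arrived, timeout_s=600, interval_s=30):
        alert("Nightly check failed: test email was not forwarded")
        return False
    return True
```

Injecting the three steps as functions also makes the check itself testable with stubs, which matters in a pipeline whose surrounding parts are no-code.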
Scalability of no-code technology in companies
Even no-code/low-code applications that are well prepared from the business and UX (User eXperience) side can have problems with scalability and performance. The reason is very often the same as for the shortcomings mentioned above, namely the lack of technical knowledge among those building NCLC-based applications.
Once, a customer from the e-commerce industry came to us with a data migration problem. He wanted to transfer data from his old e-commerce system to the new one we were building for him, and initially commissioned the task to a person from his Digital Sales department who had already built simple automations as part of Sales & Marketing activities. As it turned out during the migration attempt, the system very quickly ceased to scale and the data transfer would simply have taken too long. Combined with another problem, inconsistencies and errors in the data from the old system, this led to a situation where an import ran for several hours, was then interrupted by an error that had to be corrected, and the import was started all over again. After 3 weeks of unsuccessful attempts and repeated postponements of the migration deadline, the customer asked us to analyze the problem. Our inspection revealed that each record was inserted in a separate database transaction. From the point of view of the no-code platform's editor, which the Digital Sales department was using, it was completely invisible that the system opened a database transaction, sent a single record, closed the transaction, and did this several hundred thousand times in a loop. After finding the root cause, we modified the automation to insert all the data in one transaction: open a transaction, insert hundreds of thousands of records, and commit only once. This reduced the data migration time by more than 90%.
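The difference between the two approaches can be shown with a minimal sketch. SQLite stands in here for the client's actual database, and the table and column names are invented purely for illustration:

```python
import sqlite3

def migrate_per_record(conn, rows):
    """The anti-pattern the visual editor was hiding:
    one transaction per inserted record."""
    for row in rows:
        with conn:  # opens and commits a transaction for every insert
            conn.execute("INSERT INTO products (name, price) VALUES (?, ?)", row)

def migrate_batched(conn, rows):
    """The fix: a single transaction for the whole import,
    committed only once at the end."""
    with conn:
        conn.executemany("INSERT INTO products (name, price) VALUES (?, ?)", rows)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (name TEXT, price REAL)")
rows = [(f"product-{i}", i * 1.5) for i in range(1000)]
migrate_batched(conn, rows)
```

With hundreds of thousands of records, the per-record variant pays the transaction overhead (locking, flushing to disk) on every row, which is exactly where the multi-hour import times came from.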
Licensing models of no-code platforms
There are as many licensing models as there are NCLC platforms. Some platforms use a pay-as-you-go approach (e.g. Zapier or n8n), others bill for shared or dedicated infrastructure (e.g. Xano), and still others charge per-user fees tied to a set of functionalities (e.g. Airtable or ClickUp). Even within one billing type, each platform does it in its own way. Therefore, when choosing the right technology, one should look not only through the prism of the technologies it offers, but also at the licensing models, and perform a TCO (Total Cost of Ownership) analysis assessing the costs of:
- development - related to the development of software,
- maintenance - related to the ongoing upkeep of the solution (servers, sysadmin, support lines 1-3),
- licensing - related to license fees.
As a rule, the higher the development and maintenance costs, the lower the license fees, and vice versa, though this regularity does not always hold. By analyzing the customer's requirements well and knowing the price lists of individual platforms, you can balance these costs cleverly. This was the case in one of our projects for a large private university. We created a university course management system to be used by both teachers and students. Airtable seemed the ideal solution, but due to the large number of users (lecturers and students), the licensing costs were unacceptable for the client. After a deeper analysis of the requirements, we were able to design a solution that limited the number of privileged users (editors) with access to editable table views to just a few people. We gave the other users dedicated read-only interfaces and data entry forms, so they could operate on free accounts, which drastically reduced the licensing costs for our client. In addition, as a university, the client was entitled to an additional 50% Airtable discount, which made the offer even more attractive. At first glance, it might seem that a better option would be an alternative platform that increases development work while reducing licensing costs. However, with a conscious analysis of the actual requirements and pricing plans, you can deftly reduce all three cost components.
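A TCO comparison over the three cost components is simple arithmetic; the sketch below shows the shape of such an analysis, with all figures invented purely for illustration (they are not the numbers from the university project):

```python
def tco(development: float, maintenance_per_year: float,
        licensing_per_year: float, years: int = 3) -> float:
    """Total cost of ownership over a given horizon: one-off
    development cost plus recurring maintenance and licensing."""
    return development + years * (maintenance_per_year + licensing_per_year)

# Hypothetical scenario: every user on a paid seat vs. a slightly
# more expensive build that keeps only a few editors on paid seats.
option_a = tco(development=20_000, maintenance_per_year=2_000,
               licensing_per_year=30_000)  # all users licensed
option_b = tco(development=35_000, maintenance_per_year=2_000,
               licensing_per_year=4_000)   # only a few editors licensed
```

The point of the exercise is that the cheapest option to build is not necessarily the cheapest to own; the horizon (`years`) and the split between one-off and recurring costs decide which offer wins.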
Does your company need a regular developer or just a citizen developer?
No-code/low-code (NCLC) platforms are revolutionizing software development, enabling companies to deliver business value faster and cheaper than traditional programming methods. With visual interfaces and off-the-shelf components, applications are built in a fraction of the time and development costs are significantly lower, confirming Gartner's forecast that by 2025 more than 70% of new business applications will be based on NCLC. However, to realize their full potential, technical knowledge cannot be abandoned entirely. Experts in servers, cybersecurity or artificial intelligence remain essential to ensure the scalability, security and optimization of solutions. A pragmatic approach, based on requirements analysis, experience in delivering both full-code and no-code/low-code projects, and knowledge of the licensing models of NCLC platforms, allows you to choose the right technology while minimizing development, maintenance and licensing costs. When choosing an agency for NCLC projects, it is worth picking a partner that combines the speed of NCLC platforms with technical precision, guaranteeing applications that are not only efficient, but also secure and scalable.