Domain Explainers and AI

A blog post on the domain explainers that can be achieved through LLMs

Created using Grok AI

Any mention of AI in this post refers to LLMs.

On par with universal explainers, a human using an LLM can explain everything within a domain. For example, an AppSec engineer cannot explain IoT security if the engineer lacks that knowledge at the time. With AI, the AppSec engineer can acquire the knowledge or delegate the IoT security work to the AI.

Though present-day LLMs are not universal explainers, AGIs are: an AGI can think and create explanations of knowledge.

How do you confirm that a bug does not exist in a system?

To know that something does not exist, you must know everything that does exist. That is, you must be omniscient.

AI is omniscient, like God. It holds all the knowledge known to mankind. So AI knows everything that exists, and it can prove that something does not exist.

For example, how would you prove that no lion lives in a forest? You would have to show that there are no lion attacks on other animals and no lion footprints. In other words, you look for evidence of existence: carcasses, footprints, smell, droppings. But proving that something does not exist is difficult, because you have to find and rule out every possibility.

How do you prove that a remote code execution (RCE) bug does not exist in your software?

Will you attack every endpoint in your software with RCE payloads? A better option is to examine what exists. Look for RCE-enabling configurations in the software. Look for bash scripts with unvalidated inputs. These existence checks look infinite, but the domain knowledge is finite, or at least largely finite. With AI, I believe we can eventually reach a stage where we test every such existence condition and prove that an RCE bug does not exist, or even that no security bugs exist in a piece of software.
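The "check what exists" idea can be sketched in code. The snippet below is a minimal, illustrative scanner, not a real security tool: it searches source text for a small, hypothetical list of patterns that commonly enable RCE (dynamic `eval`, shelling out with a raw string, `shell=True` in subprocess calls). The pattern list and function name are my own assumptions for illustration; real scanners use far richer rule sets.

```python
import re

# Hypothetical patterns that commonly enable remote code execution.
# Illustrative only; a real scanner's rule set would be much larger.
RCE_PATTERNS = [
    re.compile(r"\beval\s*\("),        # dynamic code evaluation
    re.compile(r"\bos\.system\s*\("),  # running a shell command built from a string
    re.compile(r"shell\s*=\s*True"),   # subprocess invoked through a shell
]


def find_rce_indicators(source: str) -> list[tuple[int, str]]:
    """Return (line_number, line) pairs that match a known RCE pattern."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if any(p.search(line) for p in RCE_PATTERNS):
            hits.append((lineno, line.strip()))
    return hits


# Example: the second and third lines match, the first is clean.
code = 'print("hello")\nos.system(user_input)\nsubprocess.run(cmd, shell=True)'
hits = find_rce_indicators(code)
```

The point of the sketch is the direction of the check: instead of firing payloads and hoping to observe a failure, it enumerates the conditions under which the bug could exist. If the enumeration were complete, an empty result would be proof of absence; in practice, it is only as strong as the pattern list.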
