Weekly Short Read #3
A weekly product security newsletter to keep you updated on what's happening.
This newsletter is free and available only by email. Each week I share blog posts and book notes directly in your inbox, with the links product security engineers need to stay current across various domains. If you enjoy these weekly short reads, please share them on social media to support the RAM newsletter. I appreciate the time you spend here.
LitmusChaos audit report. I was looking into the threat model produced by the 7ASecurity team; it would be a good reference for us to follow.
An LLM testing tool used by QA teams for quality assurance. Can we leverage it for security testing?
A good episode on pytm from the project's creator.
A good idea: combining code coverage with a SAST tool to avoid scanning dead code.
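A minimal sketch of that coverage-plus-SAST idea. The finding and coverage formats here are invented for illustration (real tools emit formats like SARIF or coverage.py's JSON report); the point is simply to keep only findings that land on lines the test suite actually executed:

```python
# Hypothetical illustration: drop SAST findings that fall on dead
# (never-executed) code, using coverage data as the liveness oracle.
# Both data shapes are made up for this sketch; real tool output differs.

def filter_findings(findings, covered_lines):
    """Keep only findings whose (file, line) appears in the coverage data."""
    return [
        f for f in findings
        if f["line"] in covered_lines.get(f["file"], set())
    ]

findings = [
    {"file": "app.py", "line": 10, "rule": "sql-injection"},
    {"file": "legacy.py", "line": 42, "rule": "weak-hash"},  # in dead code
]
# legacy.py never appears in the coverage data, so its finding is dropped.
coverage = {"app.py": {3, 10, 11}}

live = filter_findings(findings, coverage)
print(live)  # only the app.py finding remains
```

In practice you would trade completeness for signal here: dead code can still become reachable later, so this works better as a triage filter than as a hard suppression.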
Google used Gemini to perform vulnerability testing. This was a good read.
LLM orchestrator
Enumeration of private TLDs.
Book Notes
This time, I’m sharing the notes I’ve taken from a few chapters of “Management 3.0: Leading Agile Developers” by Jurgen Appelo. It's my recommended book for anyone who wants to understand agile methodologies and how to manage teams effectively. If you're looking to explore agile leadership in depth, this book is a must-read! It also helps you understand security work, since security is a complex subject to implement, with many moving parts.
The idea that things happen as we’ve planned has its roots in our innate preference for causal determinism. This is the notion that future events are necessitated by past and present events combined with the laws of nature. Causal determinism tells us that each thing that happens is caused by other things that happened before. Logically, this means that if we know all about our current situation and we know all variants of one thing leading to another, we can predict future events by calculating them from prior events and natural laws. You can catch a ball when it is thrown at you because you can predict in which direction it is going. It’s how you know what little will be left of your monthly salary after going out with your friends, or how you learned the best ways to make your brother or sister mad and get away with it.

But strange as it seems, causality is not enough. Although we can predict the return of a comet and the behavior of software in production, we cannot accurately predict next month’s weather. Neither can we predict the full combination of features, qualities, time, and resources of a software project, or the time of arrival of new customers. Complexity frequently turns interactions between you and the world into an unpredictable and unmanageable mess, full of unexpected issues and surprises.

Unfortunately, we are faced with a slight inconvenience when applying complexity theory to problem-solving: Our minds prefer causality over complexity. Our minds are wired to favor what I call “linear thinking” (assuming predictability in cause and effect) over “nonlinear thinking” (assuming things are more complex than that). We are accustomed to stories being told linearly, from start to finish. School taught us linear equations and largely ignored the much more ubiquitous nonlinear equations simply because they’re too hard to solve. We accept “he did it” much more easily than “well, some things just happen.”
The approach of deconstructing systems into their parts and analyzing how these parts interact to make up the whole is called reductionism. Holism is the idea that the behavior of a system cannot be fully determined by its component parts alone. Instead, the system as a whole determines in an important way how the system behaves. It is often seen as the opposite of reductionism, although complexity scientists believe that complexity is the bridge between the two, and both are necessary but insufficient. Even though we can apply reductionism to trace a problem back to its origins, interestingly enough, we cannot apply a constructionist approach to build a system that prevents such problems from happening in the first place. For example, we can figure out why a human heart fails (reductionism) but we can never create a heart that won’t fail. There is plenty of value in root-cause analysis. It helps you fix problems that have already happened, so they won’t happen again. But it won’t help you predict what will go wrong in the future.
The agile approach to software development grew out of discontent with the many failures of the deterministic approach, where tight control, upfront design, and top-down planning resulted in many intensively managed but disastrously performing software projects. Any attempt to create one model to fully describe a class of complex systems will always fail. It is a topic that I touch upon in Chapter 16, “All Is Wrong, But Some Is Useful,” and one that made me feel a wave of relief when I discovered it: It’s not possible. Great! That means I can work on something else! I can hardly think of a better example of failing early.
The goal of visual thinking is to make the complex understandable by making it visible, not by making it simple. However, the warning “not to make things simple” seems to me, again, a confusion of terms. What is meant is that pictures should not change the complexity (behavior, meaning) of something because that would mess up people’s ability to predict what the pictures are trying to say. Therefore, by all means, simplify everything that is hard to understand. But be careful not to linearize (“simplify”) something because the reduced behavior of what you offer may not be what your user had expected.
Innovation is a typical bottom-up phenomenon. It is doomed to fail when launched by upper management as a top-down program of special people assigned the exclusive and difficult task of inventing something new. This approach reflects the causal deterministic view of trying to take charge of what’s going to happen in the future. It doesn’t work. The complex systems approach says that innovation is not a planned result but an emergent result. However, for things to emerge, there has to be something to emerge out of. Knowledge is a key factor in innovation. Developers, designers, architects, analysts, testers, and all other types of software creators are known to be knowledge workers. The term emphasizes that the main job of many workers is to work with information. Knowledge is seen as the fuel for innovation.