As your company develops software, you must think about security at every phase. Security cannot be an add-on at the end of a project. But how do you know whether it was done right? You may need a security code audit. Keene Systems, Inc. can provide one for you.
Below are some common questions you should ask while reviewing your own code for security flaws:
Q: Upon reviewing the web.config file, are there any authentication and/or authorization rules embedded there that could lead to compromise of the site?
Q: How do the framework and application deal with errors? In particular, are detailed error messages propagated back to the client?
Q: Have debug information and debugging been disabled?
Q: What are the validateRequest and EnableViewStateMac directives set to for the ASP.NET application?
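A quick way to audit the preceding items is to inspect web.config directly. The fragment below is a minimal hardened sketch, not a complete configuration; the values shown are the generally recommended production settings, and the redirect path is a hypothetical placeholder:

```xml
<configuration>
  <system.web>
    <!-- Disable debug compilation in production -->
    <compilation debug="false" />
    <!-- Hide detailed error messages from remote clients -->
    <customErrors mode="RemoteOnly" defaultRedirect="~/Error.aspx" />
    <!-- Reject request input containing potentially dangerous markup,
         and protect view state against tampering -->
    <pages validateRequest="true" enableViewStateMac="true" />
  </system.web>
</configuration>
```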
Q: Have the default permission sets on file system and database-based resources, such as configuration files, log files, and database tables, been established properly?
Q: Is sensitive information such as social security numbers, user credentials or credit card information being transmitted in the clear or stored as plaintext in the database?
Q: Are all cryptographic primitives being used well-known, well-documented and publicly scrutinized algorithms and do key lengths meet industry standards and best practices?
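As an illustration of using well-known, publicly scrutinized primitives for credential storage, here is a sketch in Python (chosen for brevity; .NET offers the equivalent Rfc2898DeriveBytes class). It uses PBKDF2-HMAC-SHA256, a standard key-derivation function; the iteration count is an assumption to be tuned to current guidance:

```python
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes = None, iterations: int = 200_000):
    """Derive a storage-safe hash with PBKDF2-HMAC-SHA256, a standard KDF."""
    if salt is None:
        salt = os.urandom(16)  # unique random salt per credential
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, iterations, digest

def verify_password(password: str, salt: bytes, iterations: int, expected: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, expected)  # constant-time comparison
```

The point for an audit is that no home-grown hashing scheme appears anywhere: every primitive here is standardized and widely reviewed.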
Q: Is the application using rand or Math.Random to generate an authentication token? (This should be flagged as a flaw, because these are easily guessable. Instead, developers should use classes such as SecureRandom or cryptographic APIs like Microsoft CAPI.)
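The contrast between a guessable token and a CSPRNG-backed one can be sketched as follows; Python's secrets module plays the role that SecureRandom or Microsoft CAPI plays in the question above:

```python
import random
import secrets

# Flawed: general-purpose PRNGs are deterministically seeded and guessable.
def weak_token() -> str:
    return "%08x" % random.getrandbits(32)

# Better: a token drawn from the OS cryptographic RNG,
# analogous to Java's SecureRandom or Microsoft CAPI.
def strong_token(nbytes: int = 32) -> str:
    return secrets.token_hex(nbytes)
```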
Q: Are strong protocols used to validate the identity of a user or component?
Q: Is there the possibility or potential for authentication attacks such as brute-force or dictionary-based guessing attacks?
Q: Are account lockouts implemented? If so, has the potential for denial of service been considered, that is, can an attacker lock out accounts permanently and, most importantly, can they lock out the administrative accounts?
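A lockout scheme that resists the permanent denial-of-service problem raised above can be sketched as a temporary, time-limited lockout. This is a hypothetical in-memory illustration; the threshold and expiry window are assumptions:

```python
import time

LOCKOUT_THRESHOLD = 5        # failed attempts before lockout (assumed value)
LOCKOUT_SECONDS = 15 * 60    # temporary window, so accounts cannot be locked forever

_failures = {}  # username -> (failure_count, first_failure_time)

def record_failure(user: str, now: float = None) -> None:
    now = now if now is not None else time.time()
    count, since = _failures.get(user, (0, now))
    _failures[user] = (count + 1, since)

def is_locked(user: str, now: float = None) -> bool:
    now = now if now is not None else time.time()
    count, since = _failures.get(user, (0, now))
    if count < LOCKOUT_THRESHOLD:
        return False
    if now - since >= LOCKOUT_SECONDS:
        _failures.pop(user, None)  # lockout expires: no permanent DoS
        return False
    return True
```

Because the lockout expires on its own, an attacker hammering the administrative account can delay but never permanently deny legitimate access.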
Q: Is there a password policy? Has it been reviewed for adherence to enterprise or industry requirements and best practices?
Q: Are there appropriate mechanisms to enforce access control on protected resources in the system?
Q: Can a malicious user elevate his or her privilege by changing an authorization token, or can a business-critical piece of data, such as the price of a product in an e-commerce application, be tampered with by the attacker?
Q: Is there use of a so-called “admin token”? These are special tokens or flags that, if passed to the application, cause it to launch the administrative interface, disable all security checks, or allow unfettered access in some form. Developers typically introduce these to aid in debugging and either forget to remove them from production systems or assume no one will find them. Have these been removed?
Q: How is a user’s session managed within the application?
Q: Can a session token be replayed to impersonate the user?
Q: Do sessions time out after an extended period of inactivity?
Q: Can a user intrude into the session of another user?
Q: Are the session tokens random and not guessable?
Q: Is all data that comes from outside the trust boundary of a component sanitized and validated? Data sanitization includes type, format, length, and range checks. It is especially important to check how the application deals with non-canonical data, such as Unicode-encoded data.
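Canonicalization-before-validation can be sketched as follows; the NFKC normalization form and the allow-list pattern are illustrative choices for a hypothetical username field:

```python
import re
import unicodedata

USERNAME_RE = re.compile(r"^[A-Za-z0-9_]{3,32}$")  # allow-list validation

def sanitize_username(raw: str) -> str:
    # Canonicalize first, so visually equivalent Unicode forms
    # (e.g. fullwidth letters) cannot slip past the allow-list check.
    canonical = unicodedata.normalize("NFKC", raw)
    if not USERNAME_RE.fullmatch(canonical):
        raise ValueError("invalid username")
    return canonical
```

Validating before normalizing, in the other order, is the classic flaw: the check passes on the encoded form while the application later operates on the decoded one.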
Q: Is there output validation? It is critical for dealing with problems such as cross-site scripting.
Q: Is the application to be internationalized or localized for a specific language? If so, have the regular expression validators, and the application in general, been verified to work correctly for those locales?
Q: Are any instances of SQL queries being constructed dynamically using string concatenation of parameters obtained from the user?
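The safe alternative to string concatenation is a parameterized query, in which the driver binds values so that input is never parsed as SQL. A minimal sketch using Python's built-in sqlite3 module (the same pattern applies to parameterized SqlCommand queries in ADO.NET):

```python
import sqlite3

def find_user(conn: sqlite3.Connection, username: str):
    # Unsafe version (flag in an audit):
    #   conn.execute("SELECT id, name FROM users WHERE name = '" + username + "'")
    # Safe version: the ? placeholder is bound by the driver, so the
    # value can never change the structure of the SQL statement.
    cur = conn.execute("SELECT id, name FROM users WHERE name = ?", (username,))
    return cur.fetchone()
```

A classic injection payload passed to the safe version simply matches no rows instead of rewriting the query.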
Q: Are stored procedures written in a safe manner (i.e. they do not use string parameters and the exec call to execute other stored procedures)?
Q: Are errors and exceptions dealt with in a secure manner?
Q: Is any error information saved that could lead to information disclosure?
Q: How “user-friendly” are the security error messages? It is better to give too little information than too much information.
Q: Do these messages clearly indicate the security and usability implications of the user’s decision, and is the user provided with enough information to make that decision?
Q: Do exception handlers wrap all security-significant operations such as database operations and cryptography?
Q: Are page- and application-level exception handlers set correctly?
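A wrapper that keeps detailed errors out of client-facing messages, while preserving them server-side for diagnosis, might look like this sketch; the incident-reference scheme is an illustrative assumption:

```python
import logging
import uuid

log = logging.getLogger("audit")

def safe_call(operation, *args):
    """Run a security-significant operation (database, crypto) inside a handler."""
    try:
        return True, operation(*args)
    except Exception:
        incident = uuid.uuid4().hex[:8]  # correlation id for support staff
        # Full stack trace goes to the server-side log only.
        log.exception("operation failed [incident %s]", incident)
        # The client sees a generic message: no stack trace, no internals.
        return False, f"An error occurred (reference {incident})."
```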
Q: Are all security sensitive operations being logged to create an audit trail? This includes, but is not restricted to, failed and successful logons, impersonation, privilege elevation, change of log setting or clearing the log, cryptographic operations and session lifetime events.
Q: Can log files be modified, deleted or cleared by unauthorized users?
Q: Is too much information being logged, leading to sensitive information disclosure?
Q: Can cross-site scripting payloads be inserted into the log files, creating additional vulnerabilities?
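Log injection and XSS-in-logs can both be mitigated by neutralizing user-controlled values before they are written. A minimal sketch:

```python
import html

def sanitize_for_log(value: str) -> str:
    # Strip CR/LF so an attacker cannot forge fake log entries, then
    # escape markup so a browser-based log viewer will not execute
    # injected script tags.
    no_newlines = value.replace("\r", " ").replace("\n", " ")
    return html.escape(no_newlines)
```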
Contact Keene Systems, Inc. for a free consultation.