Research and Assessment (DORA) team has published an annual survey report on the DevOps industry since 2014
✓ Written by the Google Cloud team
✓ Trust in AI-generated code
  ・The code used in development work is complex
  ・A large majority of respondents (87.9%) report some degree of trust
✓ Quality of AI-generated code
  Overall, 39.2% of respondents report low trust:
  they trust it very little (27.3%) or not at all (11.9%)
Excerpted from https://cloud.google.com/devops/state-of-devops
control
3. One step build and deploy
4. Feature flags
5. Shared metrics
6. IRC and IM robots
Culture
1. Respect
2. Trust
3. Healthy attitude about failure
4. Avoiding Blame

10 deploys per day: dev & ops cooperation at Flickr
"The important points of DevOps are mostly written in the original sources (Part 2)" — Developers Summit 2013 Summer (Publickey)
Len Bass, et al., "DevOps教科書" (DevOps: A Software Architect's Perspective)
Quoted from https://www.slideshare.net/slideshow/10-deploys-per-day-dev-and-ops-cooperation-at-flickr/1628368
Quoted from https://www.publickey1.jp/blog/13/devops_developers_summit_2013_summer_1.html
| Release | End of Full Support | End of ELS-1 Support Phase |
| --- | --- | --- |
| OpenJDK 6 (1.6) | December 2016 | N/A |
| OpenJDK 7 (1.7) | June 2020 | N/A |
| OpenJDK 8 (1.8) | Nov 2026* | N/A |
| OpenJDK 11 | October 2024 | October 2027 |
| OpenJDK 17 | October 2027 | N/A |
| OpenJDK 21 | December 2029 | N/A |

OpenJDK Lifecycle Dates and RHEL versions: https://access.redhat.com/articles/1299013
Overview of Spring Boot and its support periods – reference documentation
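When planning a migration against these lifecycle dates, a first step is identifying which feature release a service actually runs on. A minimal sketch (the class and method names are illustrative, not from any library) that derives the feature release number from the standard `java.version` system property, which reports e.g. `1.8.0_392` on JDK 8 and `17.0.2` on JDK 17:

```java
public class JdkVersionCheck {
    // Returns the feature release number: "1.8.0_392" -> 8, "17.0.2" -> 17, "11" -> 11.
    static int featureVersion(String version) {
        if (version.startsWith("1.")) {
            // Pre-JDK 9 scheme: 1.<feature>.<minor>
            return Integer.parseInt(version.substring(2, 3));
        }
        // JDK 9+ scheme: <feature>[.<interim>...]
        int dot = version.indexOf('.');
        return Integer.parseInt(dot == -1 ? version : version.substring(0, dot));
    }

    public static void main(String[] args) {
        System.out.println("Running on JDK feature release: "
                + featureVersion(System.getProperty("java.version")));
    }
}
```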
transformation web experience

Analytics
✓ Summarizing your data

Management and governance
✓ Exploring nodes using text prompts
✓ Investigating operational issues (preview)
✓ Taking inventory of your AWS resources
✓ Use Amazon Q in the AWS Console Mobile Application
✓ Diagnosing console errors

Compute
✓ Choosing Amazon Elastic Compute Cloud instances

Databases
✓ Writing database queries with natural language

Networking and content delivery
✓ Analyzing network troubleshooting

Developer tools
✓ Developing code features
✓ Getting inline code suggestions
✓ Chatting about code
✓ Reviewing your code for security vulnerabilities and quality issues
✓ Transforming code
✓ Generating unit tests
✓ Developing software in Amazon CodeCatalyst
✓ Chatting about code in Amazon SageMaker AI Studio
✓ Interacting with command line and AWS CloudShell

Application integration
✓ Writing scripts to automate AWS services
✓ Writing ETL scripts and integrating data

Third-party tools
✓ Using GitLab Duo with Amazon Q

Cloud Financial Management
✓ Understanding your costs

Customer support
✓ Getting customer support directly from Amazon Q
✓ Creating a support ticket
✓ Amazon Q in AWS Chatbot
⚫ The commercial platform is provided by Moderne, Inc. (US)

Example recipes:
• Common static analysis issue remediation
• Automatically fix Checkstyle violations
• Migrate to Java 17
• Migrate to JUnit 5 from JUnit 4
• Migrate to Spring Boot 3 from Spring Boot 2
• Migrate to Spring Boot 2 from Spring Boot 1
• Migrate to Quarkus 2 from Quarkus 1
• Migrate to SLF4J from Log4J
• Migrate to Jakarta EE 10

Example recipe implementation:

```
import java.util.Arrays;
import java.util.List;

public class JUnit5Migration extends Recipe {
    // Standard recipe descriptions and names ...
    @Override
    public List<Recipe> getRecipeList() {
        return Arrays.asList(
            new ChangeType("org.junit.Test", "org.junit.jupiter.api.Test", false),
            new AssertToAssertions(),
            new RemovePublicTestModifiers()
        );
    }
}
```
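The `ChangeType` step in the recipe above swaps JUnit 4's `org.junit.Test` for JUnit 5's `org.junit.jupiter.api.Test` wherever the type is referenced. As a toy illustration of that effect only (the demo class below is hypothetical, and OpenRewrite itself operates on a lossless semantic tree of the source, not on text replacement):

```java
public class JUnit5MigrationDemo {
    // Toy stand-in for ChangeType("org.junit.Test", "org.junit.jupiter.api.Test", false):
    // rewrites the fully qualified type name in a source fragment.
    static String changeType(String source) {
        return source.replace("org.junit.Test", "org.junit.jupiter.api.Test");
    }

    public static void main(String[] args) {
        String before = "import org.junit.Test;";
        System.out.println(changeType(before)); // import org.junit.jupiter.api.Test;
    }
}
```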
✓ Mandatory method not called after object creation
✓ Process empty record list in Amazon KCL
✓ AWS object presence check
✓ Missing timeout check on CountDownLatch.await
✓ Unspecified default value
✓ Device Permission Usage
✓ Deserialization of untrusted object
✓ Preserve thread interruption status rule
✓ Missing check on the value returned by moveToFirst API
✓ Missing timeout check on ExecutorService.awaitTermination
✓ Overflow when deserializing relational database objects
✓ Custom manual retries of AWS SDK calls
✓ Missing null check for cache response metadata
✓ Inefficient usage of Transaction library from AWS Labs
✓ Insecure connection using unencrypted protocol
✓ Inefficient additional authenticated data (AAD) authenticity
✓ Use of a deprecated method
✓ Error-prone AWS IAM policy creation
✓ Use of externally-controlled input to build connection string
✓ Inefficient Amazon S3 manual pagination
✓ Mutually exclusive call
✓ AWS Lambda client not reused
✓ Missing check on the result of createNewFile
✓ Sensitive data stored unencrypted due to partial encryption
✓ Missing statement to record cause of InvocationTargetException
✓ Misconfigured Concurrency
✓ Inefficient polling of AWS resource
✓ Improper Initialization
✓ Unexpected re-assignment of synchronized objects
✓ XPath injection
✓ AWS client not reused in a Lambda function
✓ Long polling is not enabled in Amazon SQS
✓ Insecure temporary file or directory
✓ HTTP response splitting
✓ Input and output values become out of sync
✓ Server-side request forgery
✓ Missing Authorization for address id
✓ Do not catch and throw exception
✓ Concurrency deadlock
✓ Not recommended aws credentials classes
✓ Path traversal
✓ Override of reserved variable names in a Lambda function
✓ Missing byte array length of JSON parser
✓ Usage of an API that is not recommended
✓ Hardcoded credentials
✓ Insecure JSON web token (JWT) parsing
✓ Not calling finalize causes skipped cleanup steps
✓ Unchecked S3 object metadata content length
✓ Untrusted data in security decision
✓ Permissive cors configuration rule
✓ Insecure cookie
✓ Resource leak
✓ XML External Entity
✓ Bad parameters used with AWS API methods
✓ Missing position check before getting substring
✓ LDAP injection
✓ Avoid reset exception in Amazon S3
✓ Insecure hashing
✓ Backward compatibility breaks with error message parsing
✓ Inefficient map entry iteration
✓ Missing S3 bucket owner condition
✓ AWS DynamoDB getItem output is not null checked
✓ Invalid public method parameters
✓ Log injection
✓ Sensitive information leak
✓ Usage of multiple date time pattern formatter
✓ Synchronous publication of AWS Lambda metrics
✓ XML External Entity Document Builder Factory
✓ Improper use of classes that aren't thread-safe
✓ Incorrect null check before setting a value
✓ Insufficient use of name in Amazon SQS queue
✓ Missing check on the value returned by ResultSet.next
✓ Insecure TLS version
✓ Unsanitized input is run as code
✓ Use an enum to specify an AWS Region
✓ Improperly formatted string arguments
✓ Improper service shutdown
✓ Unrestricted upload of dangerous file type
✓ Untrusted AMI images
✓ Insecure SAML parser configuration
✓ Cross-site request forgery
✓ Case sensitive keys in S3 object user metadata
✓ Stack trace not included in re-thrown exception
✓ Region specification missing from AWS client initialization
✓ Insufficient number of PBEKeySpec iterations
✓ URL redirection to untrusted site
✓ Use of externally-controlled input to select classes or code
✓ Missing encryption of sensitive data in storage
✓ Ignored output of DynamoDBMapper operations
✓ Null pointer dereference
✓ Cross-site scripting
✓ Unauthenticated LDAP requests
✓ Use of inefficient APIs
✓ Low maintainability with old Android features
✓ Atomicity violation
✓ Missing handling of specifically-thrown exceptions
✓ Weak obfuscation of web request
✓ Clear text credentials
✓ Session fixation
✓ Catching and not re-throwing or logging exceptions
✓ Missing check when launching an Android activity with an implicit intent
✓ Client constructor deprecation
✓ Inefficient use of stream sorting
✓ Arithmetic overflow or underflow
✓ Simplifiable code
✓ Loose file permissions
✓ Manual pagination
✓ Incorrect string equality operator
✓ Inefficient chain of AWS API calls
✓ OS command injection
✓ Internationalization
✓ Code clone
✓ SQL injection
✓ Missing check on method output
✓ Missing pagination
✓ Resources used by an Amazon S3 TransferManager are not released
✓ Insecure cryptography
✓ Missing timezone of SimpleDateFormat
✓ Low maintainability with low class cohesion
✓ Oversynchronization
✓ Infinite loop
✓ Batch operations preferred over looping
✓ Object Input Stream Insecure Deserialization
✓ Weak pseudorandom number generation
✓ Insecure CORS policy
✓ Missing handling of file deletion result
✓ Amazon SQS message visibility changed without a status check
✓ State machine execution ARN is not logged
✓ Client-side KMS reencryption
✓ Use Stream::anyMatch instead of Stream::findFirst or Stream::findAny
✓ Batch request with unchecked failures
Quoted from https://docs.aws.amazon.com/codeguru/detector-library/java/
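Many of these detectors target well-known Java pitfalls rather than AWS-specific issues. For instance, the "Incorrect string equality operator" detector flags reference comparison of strings with `==`; a minimal sketch of the issue and the fix (the example values and helper are illustrative):

```java
public class StringEqualityExample {
    // Noncompliant pattern: == compares object references, not contents.
    static boolean referenceEquals(String a, String b) {
        return a == b;
    }

    public static void main(String[] args) {
        String a = "hello";
        String b = new String("hello"); // distinct object with equal contents

        System.out.println(referenceEquals(a, b)); // false: different references
        System.out.println(a.equals(b));           // true: equals() compares contents
    }
}
```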
of benchmarks for various components of the Spring Framework, focusing on performance testing and optimization. The benchmarks in this project are designed to measure the performance of specific Spring Framework components and utilities. These benchmarks use the Java Microbenchmark Harness (JMH) to provide accurate and repeatable performance measurements.

## Repository Structure

```
.
└── java
    └── org
        └── springframework
            ├── core
            │   ├── codec
            │   │   └── StringDecoderBenchmark.java
            │   ├── convert
            │   │   └── support
            │   │       └── GenericConversionServiceBenchmark.java
            │   └── env
            │       └── CompositePropertySourceBenchmark.java
            └── util
                ├── ConcurrentLruCacheBenchmark.java
                ├── ConcurrentReferenceHashMapBenchmark.java
                ├── ReflectionUtilsUniqueDeclaredMethodsBenchmark.java
                └── StringUtilsBenchmark.java
```

The repository is organized into packages that correspond to the Spring Framework structure. Each benchmark class is located in its respective package based on the component it tests.

## Usage Instructions

### Prerequisites

- Java Development Kit (JDK) 8 or higher
- Maven or Gradle (for dependency management and building)

### Running the Benchmarks

To run a specific benchmark, use the following command:

```
java -jar spring-core-jmh.jar [options] [benchmark-name]
```

For example, to run the ConcurrentReferenceHashMapBenchmark:

```
java -jar spring-core-jmh.jar -t 30 -f 2 ConcurrentReferenceHashMapBenchmark
```

Options:
- `-t`: Specifies the number of threads to use
- `-f`: Specifies the number of forks (separate JVM instances)

### Benchmark Classes

1. **GenericConversionServiceBenchmark**: Tests the performance of converting collections using GenericConversionService.
2. **ConcurrentReferenceHashMapBenchmark**: Compares the performance of ConcurrentReferenceHashMap with Collections.synchronizedMap(Map).
3. **CompositePropertySourceBenchmark**: Measures the performance of retrieving property names from CompositePropertySource.
4. **StringDecoderBenchmark**: Tests the performance of parsing Server-Sent Events (SSE) lines using StringDecoder.
5. **StringUtilsBenchmark**: Benchmarks various utility methods in StringUtils, such as collectionToDelimitedString and cleanPath.
6. **ConcurrentLruCacheBenchmark**: Measures the throughput of the ConcurrentLruCache.
7. **ReflectionUtilsUniqueDeclaredMethodsBenchmark**: Tests the performance of finding unique declared methods using ReflectionUtils.

## Data Flow

The benchmarks typically follow this general flow:

1. Setup: Initialize test data and benchmark state.
2. Execution: Run the benchmarked method or operation.
3. Measurement: JMH measures the performance metrics (e.g., throughput, average time).
4. Results: JMH outputs the benchmark results.

```
[Setup] -> [Execution] -> [Measurement] -> [Results]
```

## Troubleshooting

### Common Issues

1. **OutOfMemoryError**:
   - Problem: JVM runs out of memory during benchmark execution.
   - Solution: Increase the heap size using the `-Xmx` JVM option, e.g., `java -Xmx4g -jar spring-core-jmh.jar ...`
2. **Inconsistent Results**:
   - Problem: Benchmark results vary significantly between runs.
   - Solution: Increase the number of measurement iterations and forks to reduce variability. Use options like `-i 10 -f 3`.

### Debugging

To enable verbose logging for JMH:

```
java -jar spring-core-jmh.jar -v EXTRA [benchmark-name]
```

JMH log files are typically located in the current working directory with names like `jmh-result.text` or `jmh-result.json`.

### Performance Optimization

- Monitor CPU usage and memory consumption during benchmark execution.
- Use profiling tools like VisualVM or YourKit to identify bottlenecks.
- Consider warm-up iterations to stabilize JVM performance before measurement.

For specific performance issues, refer to the individual benchmark class documentation for tailored optimization strategies.
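The Setup → Execution → Measurement → Results flow above is exactly what JMH automates. A naive hand-rolled equivalent looks like the sketch below (illustrative only, with a made-up workload: JMH exists precisely because loops like this mismeasure, since they ignore warm-up, dead-code elimination, and JIT effects that JMH's forks and iterations control for):

```java
import java.util.concurrent.TimeUnit;

public class NaiveBenchmark {
    // Setup: fixed test data.
    static final String INPUT = "a,b,c,d,e";

    // Execution target: the operation under test.
    static int splitCount(String s) {
        return s.split(",").length;
    }

    public static void main(String[] args) {
        int iterations = 100_000;
        long start = System.nanoTime();
        int sink = 0; // consume results so the JIT cannot discard the work
        for (int i = 0; i < iterations; i++) {
            sink += splitCount(INPUT);
        }
        long elapsedMs = TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - start);
        // Results: crude timing estimate; JMH reports this properly per iteration/fork.
        System.out.println(iterations + " ops in " + elapsedMs + " ms (sink=" + sink + ")");
    }
}
```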