-
-
Notifications
You must be signed in to change notification settings - Fork 1.1k
bench: refactor to use string interpolation in assert #9786
New issue
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
base: develop
Are you sure you want to change the base?
bench: refactor to use string interpolation in assert #9786
Conversation
Refactor benchmarks to use the format utility for label generation
instead of string concatenation. This ensures consistency with
standard project benchmark naming conventions.
---
type: pre_commit_static_analysis_report
description: Results of running static analysis checks when committing changes.
report:
- task: lint_filenames
status: passed
- task: lint_editorconfig
status: passed
- task: lint_markdown
status: na
- task: lint_package_json
status: na
- task: lint_repl_help
status: na
- task: lint_javascript_src
status: na
- task: lint_javascript_cli
status: na
- task: lint_javascript_examples
status: na
- task: lint_javascript_tests
status: na
- task: lint_javascript_benchmarks
status: na
- task: lint_python
status: na
- task: lint_r
status: na
- task: lint_c_src
status: na
- task: lint_c_examples
status: na
- task: lint_c_benchmarks
status: na
- task: lint_c_tests_fixtures
status: na
- task: lint_shell
status: na
- task: lint_typescript_declarations
status: passed
- task: lint_typescript_tests
status: na
- task: lint_license_headers
status: passed
---
|
Hello! Thank you for your contribution to stdlib. We noticed that the contributing guidelines acknowledgment is missing from your pull request. Here's what you need to do:
This acknowledgment confirms that you've read the guidelines, which include:
We can't review or accept contributions without this acknowledgment. Thank you for your understanding and cooperation. We look forward to reviewing your contribution! |
|
👋 Hi there! 👋 And thank you for opening your first pull request! We will review it shortly. 🏃 💨 Getting Started
Next Steps
Running Tests LocallyYou can use # Run tests for all packages in the math namespace:
make test TESTS_FILTER=".*/@stdlib/math/.*"
# Run benchmarks for a specific package:
make benchmark BENCHMARKS_FILTER=".*/@stdlib/math/base/special/sin/.*"If you haven't heard back from us within two weeks, please ping us by tagging the "reviewers" team in a comment on this PR. If you have any further questions while waiting for a response, please join our Zulip community to chat with project maintainers and other community members. We appreciate your contribution! Documentation Links |
| // MAIN // | ||
|
|
||
| bench( pkg, function benchmark( b ) { | ||
| bench( format( '%s', pkg ), function benchmark( b ) { |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
| bench( format( '%s', pkg ), function benchmark( b ) { | |
| bench( pkg, function benchmark( b ) { |
---
type: pre_commit_static_analysis_report
description: Results of running static analysis checks when committing changes.
report:
- task: lint_filenames
status: passed
- task: lint_editorconfig
status: passed
- task: lint_markdown
status: na
- task: lint_package_json
status: na
- task: lint_repl_help
status: na
- task: lint_javascript_src
status: na
- task: lint_javascript_cli
status: na
- task: lint_javascript_examples
status: na
- task: lint_javascript_tests
status: na
- task: lint_javascript_benchmarks
status: passed
- task: lint_python
status: na
- task: lint_r
status: na
- task: lint_c_src
status: na
- task: lint_c_examples
status: na
- task: lint_c_benchmarks
status: na
- task: lint_c_tests_fixtures
status: na
- task: lint_shell
status: na
- task: lint_typescript_declarations
status: passed
- task: lint_typescript_tests
status: na
- task: lint_license_headers
status: passed
---
|
I have updated the benchmarks for all four packages to address the feedback. I've used pkg directly where no concatenation was originally present and implemented the format utility where string concatenation (+) was used. I've also verified that the benchmarks run correctly locally and that the files pass the internal linting checks. |
Use this version to ensure the formatting is exactly how the maintainers expect it:
Refactor benchmarks to use the format utility for label generation instead of string concatenation. This ensures consistency with standard project benchmark naming conventions and facilitates future linting of benchmark names.
type: pre_commit_static_analysis_report description: Results of running static analysis checks when committing changes. report:
task: lint_filenames status: passed
task: lint_editorconfig status: passed
task: lint_markdown status: na
task: lint_package_json status: na
task: lint_repl_help status: na
task: lint_javascript_src status: na
task: lint_javascript_cli status: na
task: lint_javascript_examples status: na
task: lint_javascript_tests status: na
task: lint_javascript_benchmarks status: passed
task: lint_python status: na
task: lint_r status: na
task: lint_c_src status: na
task: lint_c_examples status: na
task: lint_c_benchmarks status: na
task: lint_c_tests_fixtures status: na
task: lint_shell status: na
task: lint_typescript_declarations status: passed
task: lint_typescript_tests status: na
task: lint_license_headers status: passed
Resolves a part of #8647.
Description
This pull request:
Replaces manual string concatenation with the @stdlib/string/format utility in benchmarks for is-persymmetric-matrix, is-skew-centrosymmetric-matrix, is-skew-persymmetric-matrix, and is-square-matrix.
Adds the @stdlib/string/format import as demonstrated in the RFC.
Verifies that formatted benchmark names match the original names exactly to preserve existing behavior.
Related Issues
This pull request has no other related issues.
Questions
No.
Other
No.
Checklist
AI Assistance
[ ] Yes
[x] No
If you answered "yes" above, how did you use AI assistance?
[ ] Code generation
[ ] Test/benchmark generation
[ ] Documentation (including examples)
[ ] Research and understanding