From 8411db17841fb34849a50ed9953127c44e41460e Mon Sep 17 00:00:00 2001
From: YeonghyeonKo <46114393+YeonghyeonKO@users.noreply.github.com>
Date: Tue, 24 Sep 2024 00:53:38 +0900
Subject: [PATCH] fix typos of docs/plugins (#113348)

---
 docs/plugins/analysis-icu.asciidoc            |  4 +-
 docs/plugins/analysis-kuromoji.asciidoc       |  4 +-
 docs/plugins/analysis-nori.asciidoc           |  2 +-
 .../creating-stable-plugins.asciidoc          | 50 +++++++++----------
 docs/plugins/discovery-azure-classic.asciidoc |  2 +-
 docs/plugins/discovery-gce.asciidoc           |  2 +-
 docs/plugins/integrations.asciidoc            |  4 +-
 docs/plugins/mapper-annotated-text.asciidoc   |  2 +-
 docs/plugins/store-smb.asciidoc               |  4 +-
 9 files changed, 37 insertions(+), 37 deletions(-)

diff --git a/docs/plugins/analysis-icu.asciidoc b/docs/plugins/analysis-icu.asciidoc
index f6ca6ceae7ea4..da7efd2843f50 100644
--- a/docs/plugins/analysis-icu.asciidoc
+++ b/docs/plugins/analysis-icu.asciidoc
@@ -380,7 +380,7 @@ GET /my-index-000001/_search <3>
 --------------------------

-<1> The `name` field uses the `standard` analyzer, and so support full text queries.
+<1> The `name` field uses the `standard` analyzer, and so supports full text queries.
 <2> The `name.sort` field is an `icu_collation_keyword` field that will preserve the name as
     a single token doc values, and applies the German ``phonebook'' order.
 <3> An example query which searches the `name` field and sorts on the `name.sort` field.
@@ -467,7 +467,7 @@ differences.
 `case_first`::

 Possible values: `lower` or `upper`. Useful to control which case is sorted
-first when case is not ignored for strength `tertiary`. The default depends on
+first when the case is not ignored for strength `tertiary`. The default depends on
 the collation.

 `numeric`::
diff --git a/docs/plugins/analysis-kuromoji.asciidoc b/docs/plugins/analysis-kuromoji.asciidoc
index b1d1d5a751057..fa6229b9f20e8 100644
--- a/docs/plugins/analysis-kuromoji.asciidoc
+++ b/docs/plugins/analysis-kuromoji.asciidoc
@@ -86,7 +86,7 @@ The `kuromoji_iteration_mark` normalizes Japanese horizontal iteration marks
 `normalize_kanji`::

-    Indicates whether kanji iteration marks should be normalize. Defaults to `true`.
+    Indicates whether kanji iteration marks should be normalized. Defaults to `true`.

 `normalize_kana`::

@@ -189,7 +189,7 @@ PUT kuromoji_sample
 +
 --
 Additional expert user parameters `nbest_cost` and `nbest_examples` can be used
-to include additional tokens that most likely according to the statistical model.
+to include additional tokens that are most likely according to the statistical model.
 If both parameters are used, the largest number of both is applied.

 `nbest_cost`::
diff --git a/docs/plugins/analysis-nori.asciidoc b/docs/plugins/analysis-nori.asciidoc
index 1a3153fa3bea5..369268bcef0cd 100644
--- a/docs/plugins/analysis-nori.asciidoc
+++ b/docs/plugins/analysis-nori.asciidoc
@@ -447,7 +447,7 @@ Which responds with:

 The `nori_number` token filter normalizes Korean numbers to regular Arabic
 decimal numbers in half-width characters.
-Korean numbers are often written using a combination of Hangul and Arabic numbers with various kinds punctuation.
+Korean numbers are often written using a combination of Hangul and Arabic numbers with various kinds of punctuation.
 For example, 3.2천 means 3200. This filter does this kind of normalization and
 allows a search for 3200 to match 3.2천 in text, but can also be used to make
 range facets based on the normalized numbers and so on.
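
To make the `nori_number` behavior described in the hunk above concrete, here is a minimal console example in the style of the surrounding docs. It is a sketch, not part of the patch: the index, tokenizer, and analyzer names are invented for illustration, and `discard_punctuation` is set to `false` on the assumption that the `.` in `3.2천` must reach the filter to be parsed as a decimal point.

[source,console]
----
PUT /nori_number_sample
{
  "settings": {
    "analysis": {
      "tokenizer": {
        "keep_punctuation": {
          "type": "nori_tokenizer",
          "discard_punctuation": "false"
        }
      },
      "analyzer": {
        "korean_numbers": {
          "tokenizer": "keep_punctuation",
          "filter": [ "nori_number" ]
        }
      }
    }
  }
}

GET /nori_number_sample/_analyze
{
  "analyzer": "korean_numbers",
  "text": "3.2천"
}
----

If the filter behaves as the doc text describes, the `_analyze` request should return a single token with the text `3200`.
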
diff --git a/docs/plugins/development/creating-stable-plugins.asciidoc b/docs/plugins/development/creating-stable-plugins.asciidoc
index c9a8a1f6c7e2a..9f98774b5a761 100644
--- a/docs/plugins/development/creating-stable-plugins.asciidoc
+++ b/docs/plugins/development/creating-stable-plugins.asciidoc
@@ -1,8 +1,8 @@
 [[creating-stable-plugins]]
 === Creating text analysis plugins with the stable plugin API

-Text analysis plugins provide {es} with custom {ref}/analysis.html[Lucene 
-analyzers, token filters, character filters, and tokenizers]. 
+Text analysis plugins provide {es} with custom {ref}/analysis.html[Lucene
+analyzers, token filters, character filters, and tokenizers].

 [discrete]
 ==== The stable plugin API
@@ -10,7 +10,7 @@ analyzers, token filters, character filters, and tokenizers].
 Text analysis plugins can be developed against the stable plugin API. This API
 consists of the following dependencies:

-* `plugin-api` - an API used by plugin developers to implement custom {es} 
+* `plugin-api` - an API used by plugin developers to implement custom {es}
 plugins.
 * `plugin-analysis-api` - an API used by plugin developers to implement analysis
 plugins and integrate them into {es}.
@@ -18,7 +18,7 @@ plugins and integrate them into {es}.
 core Lucene analysis interfaces like `Tokenizer`, `Analyzer`, and `TokenStream`.

 For new versions of {es} within the same major version, plugins built against
-this API do not need to be recompiled. Future versions of the API will be 
+this API do not need to be recompiled. Future versions of the API will be
 backwards compatible and plugins are binary compatible with future versions of
 {es}. In other words, once you have a working artifact, you can re-use it when
 you upgrade {es} to a new bugfix or minor version.
@@ -48,9 +48,9 @@ require code changes.

 Stable plugins are ZIP files composed of JAR files and two metadata files:

-* `stable-plugin-descriptor.properties` - a Java properties file that describes 
+* `stable-plugin-descriptor.properties` - a Java properties file that describes
 the plugin. Refer to <<plugin-descriptor-file-stable>>.
-* `named_components.json` - a JSON file mapping interfaces to key-value pairs 
+* `named_components.json` - a JSON file mapping interfaces to key-value pairs
 of component names and implementation classes.

 Note that only JAR files at the root of the plugin are added to the classpath
@@ -65,7 +65,7 @@ you use this plugin. However, you don't need Gradle to create plugins. The {es}
 Github repository contains
 {es-repo}tree/main/plugins/examples/stable-analysis[an example analysis plugin].
-The example `build.gradle` build script provides a good starting point for 
+The example `build.gradle` build script provides a good starting point for
 developing your own plugin.

 [discrete]
@@ -77,29 +77,29 @@ Plugins are written in Java, so you need to install a Java Development Kit
 [discrete]
 ===== Step by step

-. Create a directory for your project. 
+. Create a directory for your project.
 . Copy the example `build.gradle` build script to your project directory. Note
 that this build script uses the `elasticsearch.stable-esplugin` gradle plugin to
 build your plugin.
 . Edit the `build.gradle` build script:
-** Add a definition for the `pluginApiVersion` and matching `luceneVersion` 
-variables to the top of the file. You can find these versions in the 
-`build-tools-internal/version.properties` file in the {es-repo}[Elasticsearch 
+** Add a definition for the `pluginApiVersion` and matching `luceneVersion`
+variables to the top of the file. You can find these versions in the
+`build-tools-internal/version.properties` file in the {es-repo}[Elasticsearch
 Github repository].
-** Edit the `name` and `description` in the `esplugin` section of the build 
-script. This will create the plugin descriptor file. If you're not using the 
-`elasticsearch.stable-esplugin` gradle plugin, refer to 
+** Edit the `name` and `description` in the `esplugin` section of the build
+script. This will create the plugin descriptor file. If you're not using the
+`elasticsearch.stable-esplugin` gradle plugin, refer to
 <<plugin-descriptor-file-stable>> to create the file manually.
 ** Add module information.
-** Ensure you have declared the following compile-time dependencies. These 
-dependencies are compile-time only because {es} will provide these libraries at 
+** Ensure you have declared the following compile-time dependencies. These
+dependencies are compile-time only because {es} will provide these libraries at
 runtime.
 *** `org.elasticsearch.plugin:elasticsearch-plugin-api`
 *** `org.elasticsearch.plugin:elasticsearch-plugin-analysis-api`
 *** `org.apache.lucene:lucene-analysis-common`
-** For unit testing, ensure these dependencies have also been added to the 
+** For unit testing, ensure these dependencies have also been added to the
 `build.gradle` script as `testImplementation` dependencies.
-. Implement an interface from the analysis plugin API, annotating it with 
+. Implement an interface from the analysis plugin API, annotating it with
 `NamedComponent`. Refer to <<example-text-analysis-plugin>> for an example.
 . You should now be able to assemble a plugin ZIP file by running:
 +
@@ -107,22 +107,22 @@ runtime.
 ----
 gradle bundlePlugin
 ----
-The resulting plugin ZIP file is written to the `build/distributions` 
+The resulting plugin ZIP file is written to the `build/distributions`
 directory.

 [discrete]
 ===== YAML REST tests

-The Gradle `elasticsearch.yaml-rest-test` plugin enables testing of your 
-plugin using the {es-repo}blob/main/rest-api-spec/src/yamlRestTest/resources/rest-api-spec/test/README.asciidoc[{es} yamlRestTest framework]. 
+The Gradle `elasticsearch.yaml-rest-test` plugin enables testing of your
+plugin using the {es-repo}blob/main/rest-api-spec/src/yamlRestTest/resources/rest-api-spec/test/README.asciidoc[{es} yamlRestTest framework].
 These tests use a YAML-formatted domain language to issue REST requests against
-an internal {es} cluster that has your plugin installed, and to check the 
-results of those requests. The structure of a YAML REST test directory is as 
+an internal {es} cluster that has your plugin installed, and to check the
+results of those requests. The structure of a YAML REST test directory is as
 follows:

-* A test suite class, defined under `src/yamlRestTest/java`. This class should 
+* A test suite class, defined under `src/yamlRestTest/java`. This class should
 extend `ESClientYamlSuiteTestCase`.
-* The YAML tests themselves should be defined under 
+* The YAML tests themselves should be defined under
 `src/yamlRestTest/resources/test/`.

 [[plugin-descriptor-file-stable]]
diff --git a/docs/plugins/discovery-azure-classic.asciidoc b/docs/plugins/discovery-azure-classic.asciidoc
index aa710a2fe7ef9..b8d37f024172c 100644
--- a/docs/plugins/discovery-azure-classic.asciidoc
+++ b/docs/plugins/discovery-azure-classic.asciidoc
@@ -148,7 +148,7 @@ Before starting, you need to have:
 --

 You should follow http://azure.microsoft.com/en-us/documentation/articles/linux-use-ssh-key/[this guide] to learn
-how to create or use existing SSH keys. If you have already did it, you can skip the following.
+how to create or use existing SSH keys. If you have already done it, you can skip the following.

 Here is a description on how to generate SSH keys using `openssl`:

diff --git a/docs/plugins/discovery-gce.asciidoc b/docs/plugins/discovery-gce.asciidoc
index 2e8cff21208e0..0a2629b7f094b 100644
--- a/docs/plugins/discovery-gce.asciidoc
+++ b/docs/plugins/discovery-gce.asciidoc
@@ -478,7 +478,7 @@ discovery:
       seed_providers: gce
 --------------------------------------------------

-Replaces `project_id` and `zone` with your settings.
+Replace `project_id` and `zone` with your settings.

 To run test:

diff --git a/docs/plugins/integrations.asciidoc b/docs/plugins/integrations.asciidoc
index 71f237692ad35..aff4aed0becd2 100644
--- a/docs/plugins/integrations.asciidoc
+++ b/docs/plugins/integrations.asciidoc
@@ -91,7 +91,7 @@ Integrations are not plugins, but are external tools or modules that make it eas
   Elasticsearch Grails plugin.

 * https://hibernate.org/search/[Hibernate Search]
-  Integration with Hibernate ORM, from the Hibernate team. Automatic synchronization of write operations, yet exposes full Elasticsearch capabilities for queries. Can return either Elasticsearch native or re-map queries back into managed entities loaded within transaction from the reference database.
+  Integration with Hibernate ORM, from the Hibernate team. Automatic synchronization of write operations, yet exposes full Elasticsearch capabilities for queries. Can return either Elasticsearch native or re-map queries back into managed entities loaded within transactions from the reference database.

 * https://github.com/spring-projects/spring-data-elasticsearch[Spring Data Elasticsearch]:
   Spring Data implementation for Elasticsearch
@@ -104,7 +104,7 @@ Integrations are not plugins, but are external tools or modules that make it eas

 * https://pulsar.apache.org/docs/en/io-elasticsearch[Apache Pulsar]:
   The Elasticsearch Sink Connector is used to pull messages from Pulsar topics
-  and persist the messages to a index.
+  and persist the messages to an index.

 * https://micronaut-projects.github.io/micronaut-elasticsearch/latest/guide/index.html[Micronaut Elasticsearch Integration]:
   Integration of Micronaut with Elasticsearch
diff --git a/docs/plugins/mapper-annotated-text.asciidoc b/docs/plugins/mapper-annotated-text.asciidoc
index afe8ba41da9b8..e4141e98a2285 100644
--- a/docs/plugins/mapper-annotated-text.asciidoc
+++ b/docs/plugins/mapper-annotated-text.asciidoc
@@ -143,7 +143,7 @@ broader positional queries e.g. finding mentions of a `Guitarist` near to `strat

 WARNING: Any use of `=` signs in annotation values eg `[Prince](person=Prince)` will
 cause the document to be rejected with a parse failure. In future we hope to have a use for
-the equals signs so wil actively reject documents that contain this today.
+the equals signs so we will actively reject documents that contain this today.

 [[annotated-text-synthetic-source]]
 ===== Synthetic `_source`
diff --git a/docs/plugins/store-smb.asciidoc b/docs/plugins/store-smb.asciidoc
index 8557ef868010f..da803b4f42022 100644
--- a/docs/plugins/store-smb.asciidoc
+++ b/docs/plugins/store-smb.asciidoc
@@ -10,7 +10,7 @@ include::install_remove.asciidoc[]
 ==== Working around a bug in Windows SMB and Java on windows

 When using a shared file system based on the SMB protocol (like Azure File Service) to store indices, the way Lucene
-open index segment files is with a write only flag. This is the _correct_ way to open the files, as they will only be
+opens index segment files is with a write-only flag. This is the _correct_ way to open the files, as they will only be
 used for writes and allows different FS implementations to optimize for it. Sadly, in windows with SMB, this disables
 the cache manager, causing writes to be slow. This has been described in
 https://issues.apache.org/jira/browse/LUCENE-6176[LUCENE-6176], but it affects each and every Java program out there!.
@@ -44,7 +44,7 @@ This can be configured for all indices by adding this to the
 `elasticsearch.yml` file:

 ----
 index.store.type: smb_nio_fs
 ----

-Note that setting will be applied for newly created indices.
+Note that this setting will be applied to newly created indices.

 It can also be set on a per-index basis at index creation time:
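
A closing illustration for the `creating-stable-plugins.asciidoc` changes above: the step that reads "Implement an interface from the analysis plugin API, annotating it with `NamedComponent`" is the core of a stable plugin, and a sketch of such a class follows. This is an assumption-laden example rather than part of the patch: the package and interface names (`org.elasticsearch.plugin.NamedComponent`, `org.elasticsearch.plugin.analysis.TokenFilterFactory`) are recalled from the stable plugin API and should be checked against the example analysis plugin in the Elasticsearch repository; only the Lucene `LowerCaseFilter` usage is standard Lucene.

[source,java]
----
package org.example.analysis;

import org.apache.lucene.analysis.LowerCaseFilter;
import org.apache.lucene.analysis.TokenStream;
import org.elasticsearch.plugin.NamedComponent;              // assumed package
import org.elasticsearch.plugin.analysis.TokenFilterFactory; // assumed package

// "example_lowercase" is the name recorded in named_components.json and the
// name users would reference from their index analysis settings.
@NamedComponent("example_lowercase")
public class ExampleLowerCaseTokenFilterFactory implements TokenFilterFactory {

    @Override
    public TokenStream create(TokenStream tokenStream) {
        // Wrap the incoming token stream with a standard Lucene filter; any
        // other Lucene TokenFilter could be substituted here.
        return new LowerCaseFilter(tokenStream);
    }
}
----

With a class like this on the compile path, `gradle bundlePlugin` should produce a ZIP whose `named_components.json` maps `example_lowercase` to the implementation class, which is exactly the mapping the `named_components.json` bullet in the patch describes.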