fix typos of docs/plugins (#113348) (#113404)
Co-authored-by: YeonghyeonKo <[email protected]>
leemthompo and YeonghyeonKO authored Sep 23, 2024
1 parent f849aed commit 2fac37d
Showing 9 changed files with 37 additions and 37 deletions.
4 changes: 2 additions & 2 deletions docs/plugins/analysis-icu.asciidoc
@@ -380,7 +380,7 @@ GET /my-index-000001/_search <3>
--------------------------

-<1> The `name` field uses the `standard` analyzer, and so support full text queries.
+<1> The `name` field uses the `standard` analyzer, and so supports full text queries.
<2> The `name.sort` field is an `icu_collation_keyword` field that will preserve the name as
a single token doc values, and applies the German ``phonebook'' order.
<3> An example query which searches the `name` field and sorts on the `name.sort` field.
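For reference, the search request that callout <3> points to is elided from this hunk; it would look roughly like the following sketch (the field names match the mapping discussed above, the query text is illustrative):

[source,console]
----
GET /my-index-000001/_search
{
  "query": {
    "match": {
      "name": "Fritz"
    }
  },
  "sort": "name.sort"
}
----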
@@ -467,7 +467,7 @@ differences.
`case_first`::

Possible values: `lower` or `upper`. Useful to control which case is sorted
-first when case is not ignored for strength `tertiary`. The default depends on
+first when the case is not ignored for strength `tertiary`. The default depends on
the collation.

`numeric`::
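To illustrate the `case_first` parameter, a minimal mapping sketch that sorts upper case letters first (the index and field names are illustrative):

[source,console]
----
PUT /icu-case-first-sample
{
  "mappings": {
    "properties": {
      "name": {
        "type": "icu_collation_keyword",
        "language": "en",
        "strength": "tertiary",
        "case_first": "upper"
      }
    }
  }
}
----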
4 changes: 2 additions & 2 deletions docs/plugins/analysis-kuromoji.asciidoc
@@ -86,7 +86,7 @@ The `kuromoji_iteration_mark` normalizes Japanese horizontal iteration marks

`normalize_kanji`::

-Indicates whether kanji iteration marks should be normalize. Defaults to `true`.
+Indicates whether kanji iteration marks should be normalized. Defaults to `true`.

`normalize_kana`::

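A minimal sketch showing both options set on the char filter (the index and analyzer names are illustrative):

[source,console]
----
PUT /kuromoji-iteration-mark-sample
{
  "settings": {
    "analysis": {
      "char_filter": {
        "iteration_mark": {
          "type": "kuromoji_iteration_mark",
          "normalize_kanji": true,
          "normalize_kana": true
        }
      },
      "analyzer": {
        "my_analyzer": {
          "tokenizer": "kuromoji_tokenizer",
          "char_filter": [ "iteration_mark" ]
        }
      }
    }
  }
}
----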
@@ -189,7 +189,7 @@ PUT kuromoji_sample
+
--
Additional expert user parameters `nbest_cost` and `nbest_examples` can be used
-to include additional tokens that most likely according to the statistical model.
+to include additional tokens that are most likely according to the statistical model.
If both parameters are used, the largest number of both is applied.

`nbest_cost`::
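As a sketch, `nbest_cost` would be set on the tokenizer like this (the cost value is arbitrary, names are illustrative):

[source,console]
----
PUT /kuromoji-nbest-sample
{
  "settings": {
    "analysis": {
      "tokenizer": {
        "my_kuromoji": {
          "type": "kuromoji_tokenizer",
          "mode": "search",
          "nbest_cost": 1000
        }
      },
      "analyzer": {
        "my_analyzer": {
          "tokenizer": "my_kuromoji"
        }
      }
    }
  }
}
----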
2 changes: 1 addition & 1 deletion docs/plugins/analysis-nori.asciidoc
@@ -447,7 +447,7 @@ Which responds with:
The `nori_number` token filter normalizes Korean numbers
to regular Arabic decimal numbers in half-width characters.

-Korean numbers are often written using a combination of Hangul and Arabic numbers with various kinds punctuation.
+Korean numbers are often written using a combination of Hangul and Arabic numbers with various kinds of punctuation.
For example, 3.2천 means 3200.
This filter does this kind of normalization and allows a search for 3200 to match 3.2천 in text,
but can also be used to make range facets based on the normalized numbers and so on.
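A sketch of an analyzer wiring up `nori_number` so that 3.2천 analyzes to 3200 (the tokenizer keeps punctuation so the decimal point survives; the names are illustrative):

[source,console]
----
PUT /nori-number-sample
{
  "settings": {
    "analysis": {
      "tokenizer": {
        "nori_keep_punctuation": {
          "type": "nori_tokenizer",
          "discard_punctuation": "false"
        }
      },
      "analyzer": {
        "my_analyzer": {
          "tokenizer": "nori_keep_punctuation",
          "filter": [ "nori_number" ]
        }
      }
    }
  }
}

GET /nori-number-sample/_analyze
{
  "analyzer": "my_analyzer",
  "text": "3.2천"
}
----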
50 changes: 25 additions & 25 deletions docs/plugins/development/creating-stable-plugins.asciidoc
@@ -1,24 +1,24 @@
[[creating-stable-plugins]]
=== Creating text analysis plugins with the stable plugin API

-Text analysis plugins provide {es} with custom {ref}/analysis.html[Lucene
-analyzers, token filters, character filters, and tokenizers].
+Text analysis plugins provide {es} with custom {ref}/analysis.html[Lucene
+analyzers, token filters, character filters, and tokenizers].

[discrete]
==== The stable plugin API

Text analysis plugins can be developed against the stable plugin API. This API
consists of the following dependencies:

-* `plugin-api` - an API used by plugin developers to implement custom {es}
+* `plugin-api` - an API used by plugin developers to implement custom {es}
plugins.
* `plugin-analysis-api` - an API used by plugin developers to implement analysis
plugins and integrate them into {es}.
* `lucene-analysis-common` - a dependency of `plugin-analysis-api` that contains
core Lucene analysis interfaces like `Tokenizer`, `Analyzer`, and `TokenStream`.

For new versions of {es} within the same major version, plugins built against
-this API do not need to be recompiled. Future versions of the API will be
+this API does not need to be recompiled. Future versions of the API will be
backwards compatible and plugins are binary compatible with future versions of
{es}. In other words, once you have a working artifact, you can re-use it when
you upgrade {es} to a new bugfix or minor version.
@@ -48,9 +48,9 @@ require code changes.

Stable plugins are ZIP files composed of JAR files and two metadata files:

-* `stable-plugin-descriptor.properties` - a Java properties file that describes
+* `stable-plugin-descriptor.properties` - a Java properties file that describes
the plugin. Refer to <<plugin-descriptor-file-{plugin-type}>>.
-* `named_components.json` - a JSON file mapping interfaces to key-value pairs
+* `named_components.json` - a JSON file mapping interfaces to key-value pairs
of component names and implementation classes.
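As a sketch, a `named_components.json` for an analysis plugin maps a stable-API interface to named implementations like this (the component name and implementation class are hypothetical):

[source,json]
----
{
  "org.elasticsearch.plugin.analysis.TokenFilterFactory": {
    "example_token_filter": "org.example.ExampleTokenFilterFactory"
  }
}
----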

Note that only JAR files at the root of the plugin are added to the classpath
@@ -65,7 +65,7 @@ you use this plugin. However, you don't need Gradle to create plugins.

The {es} Github repository contains
{es-repo}tree/main/plugins/examples/stable-analysis[an example analysis plugin].
-The example `build.gradle` build script provides a good starting point for
+The example `build.gradle` build script provides a good starting point for
developing your own plugin.

[discrete]
@@ -77,52 +77,52 @@ Plugins are written in Java, so you need to install a Java Development Kit
[discrete]
===== Step by step

-. Create a directory for your project.
+. Create a directory for your project.
. Copy the example `build.gradle` build script to your project directory. Note
that this build script uses the `elasticsearch.stable-esplugin` gradle plugin to
build your plugin.
. Edit the `build.gradle` build script:
-** Add a definition for the `pluginApiVersion` and matching `luceneVersion`
-variables to the top of the file. You can find these versions in the
-`build-tools-internal/version.properties` file in the {es-repo}[Elasticsearch
+** Add a definition for the `pluginApiVersion` and matching `luceneVersion`
+variables to the top of the file. You can find these versions in the
+`build-tools-internal/version.properties` file in the {es-repo}[Elasticsearch
Github repository].
-** Edit the `name` and `description` in the `esplugin` section of the build
-script. This will create the plugin descriptor file. If you're not using the
-`elasticsearch.stable-esplugin` gradle plugin, refer to
+** Edit the `name` and `description` in the `esplugin` section of the build
+script. This will create the plugin descriptor file. If you're not using the
+`elasticsearch.stable-esplugin` gradle plugin, refer to
<<plugin-descriptor-file-{plugin-type}>> to create the file manually.
** Add module information.
-** Ensure you have declared the following compile-time dependencies. These
-dependencies are compile-time only because {es} will provide these libraries at
+** Ensure you have declared the following compile-time dependencies. These
+dependencies are compile-time only because {es} will provide these libraries at
runtime.
*** `org.elasticsearch.plugin:elasticsearch-plugin-api`
*** `org.elasticsearch.plugin:elasticsearch-plugin-analysis-api`
*** `org.apache.lucene:lucene-analysis-common`
-** For unit testing, ensure these dependencies have also been added to the
+** For unit testing, ensure these dependencies have also been added to the
`build.gradle` script as `testImplementation` dependencies.
-. Implement an interface from the analysis plugin API, annotating it with
+. Implement an interface from the analysis plugin API, annotating it with
`NamedComponent`. Refer to <<example-text-analysis-plugin>> for an example.
. You should now be able to assemble a plugin ZIP file by running:
+
[source,sh]
----
gradle bundlePlugin
----
-The resulting plugin ZIP file is written to the `build/distributions`
+The resulting plugin ZIP file is written to the `build/distributions`
directory.
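Pulling these steps together, a trimmed `build.gradle` might look like the following sketch (the versions, plugin name, and description are placeholders, not values from the example project):

[source,groovy]
----
plugins {
  id 'elasticsearch.stable-esplugin'
}

// Take these from build-tools-internal/version.properties in the {es} repo
def pluginApiVersion = '8.15.0'
def luceneVersion = '9.11.1'

esplugin {
  name 'my-analysis-plugin'
  description 'A text analysis plugin built against the stable plugin API'
}

dependencies {
  // Compile-time only: {es} provides these libraries at runtime
  compileOnly "org.elasticsearch.plugin:elasticsearch-plugin-api:${pluginApiVersion}"
  compileOnly "org.elasticsearch.plugin:elasticsearch-plugin-analysis-api:${pluginApiVersion}"
  compileOnly "org.apache.lucene:lucene-analysis-common:${luceneVersion}"

  // The same APIs are needed again for unit tests
  testImplementation "org.elasticsearch.plugin:elasticsearch-plugin-api:${pluginApiVersion}"
  testImplementation "org.elasticsearch.plugin:elasticsearch-plugin-analysis-api:${pluginApiVersion}"
  testImplementation "org.apache.lucene:lucene-analysis-common:${luceneVersion}"
}
----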

[discrete]
===== YAML REST tests

-The Gradle `elasticsearch.yaml-rest-test` plugin enables testing of your
-plugin using the {es-repo}blob/main/rest-api-spec/src/yamlRestTest/resources/rest-api-spec/test/README.asciidoc[{es} yamlRestTest framework].
+The Gradle `elasticsearch.yaml-rest-test` plugin enables testing of your
+plugin using the {es-repo}blob/main/rest-api-spec/src/yamlRestTest/resources/rest-api-spec/test/README.asciidoc[{es} yamlRestTest framework].
These tests use a YAML-formatted domain language to issue REST requests against
-an internal {es} cluster that has your plugin installed, and to check the
-results of those requests. The structure of a YAML REST test directory is as
+an internal {es} cluster that has your plugin installed, and to check the
+results of those requests. The structure of a YAML REST test directory is as
follows:

-* A test suite class, defined under `src/yamlRestTest/java`. This class should
+* A test suite class, defined under `src/yamlRestTest/java`. This class should
extend `ESClientYamlSuiteTestCase`.
-* The YAML tests themselves should be defined under
+* The YAML tests themselves should be defined under
`src/yamlRestTest/resources/test/`.
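For illustration, a minimal test under `src/yamlRestTest/resources/test/` might check that a plugin-provided analyzer is installed (the analyzer name and expected token count are hypothetical):

[source,yaml]
----
"Plugin analyzer is available":
  - do:
      indices.analyze:
        body:
          analyzer: example_analyzer
          text: Elasticsearch
  - length: { tokens: 1 }
----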

[[plugin-descriptor-file-stable]]
2 changes: 1 addition & 1 deletion docs/plugins/discovery-azure-classic.asciidoc
@@ -148,7 +148,7 @@ Before starting, you need to have:
--

You should follow http://azure.microsoft.com/en-us/documentation/articles/linux-use-ssh-key/[this guide] to learn
-how to create or use existing SSH keys. If you have already did it, you can skip the following.
+how to create or use existing SSH keys. If you have already done it, you can skip the following.

Here is a description on how to generate SSH keys using `openssl`:
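The elided steps boil down to something like this sketch (the flags and file names are illustrative, not the exact commands from the page):

[source,sh]
----
# Generate a private key and a self-signed certificate valid for one year
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout azure-private.key -out azure-certificate.pem

# Bundle both into a PKCS#12 keystore for upload to Azure
openssl pkcs12 -export -out azure-keystore.pkcs12 \
  -inkey azure-private.key -in azure-certificate.pem
----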

2 changes: 1 addition & 1 deletion docs/plugins/discovery-gce.asciidoc
@@ -478,7 +478,7 @@ discovery:
seed_providers: gce
--------------------------------------------------

-Replaces `project_id` and `zone` with your settings.
+Replace `project_id` and `zone` with your settings.
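For example, with illustrative values filled in, the `elasticsearch.yml` settings read:

[source,yaml]
----
cloud:
  gce:
    project_id: es-cloud
    zone: europe-west1-a
discovery:
  seed_providers: gce
----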

To run test:

4 changes: 2 additions & 2 deletions docs/plugins/integrations.asciidoc
@@ -91,7 +91,7 @@ Integrations are not plugins, but are external tools or modules that make it eas
Elasticsearch Grails plugin.

* https://hibernate.org/search/[Hibernate Search]
-Integration with Hibernate ORM, from the Hibernate team. Automatic synchronization of write operations, yet exposes full Elasticsearch capabilities for queries. Can return either Elasticsearch native or re-map queries back into managed entities loaded within transaction from the reference database.
+Integration with Hibernate ORM, from the Hibernate team. Automatic synchronization of write operations, yet exposes full Elasticsearch capabilities for queries. Can return either Elasticsearch native or re-map queries back into managed entities loaded within transactions from the reference database.

* https://github.com/spring-projects/spring-data-elasticsearch[Spring Data Elasticsearch]:
Spring Data implementation for Elasticsearch
@@ -104,7 +104,7 @@ Integrations are not plugins, but are external tools or modules that make it eas

* https://pulsar.apache.org/docs/en/io-elasticsearch[Apache Pulsar]:
The Elasticsearch Sink Connector is used to pull messages from Pulsar topics
-and persist the messages to a index.
+and persist the messages to an index.

* https://micronaut-projects.github.io/micronaut-elasticsearch/latest/guide/index.html[Micronaut Elasticsearch Integration]:
Integration of Micronaut with Elasticsearch
2 changes: 1 addition & 1 deletion docs/plugins/mapper-annotated-text.asciidoc
@@ -143,7 +143,7 @@ broader positional queries e.g. finding mentions of a `Guitarist` near to `strat

WARNING: Any use of `=` signs in annotation values eg `[Prince](person=Prince)` will
cause the document to be rejected with a parse failure. In future we hope to have a use for
-the equals signs so wil actively reject documents that contain this today.
+the equals signs so will actively reject documents that contain this today.

[[annotated-text-synthetic-source]]
===== Synthetic `_source`
4 changes: 2 additions & 2 deletions docs/plugins/store-smb.asciidoc
@@ -10,7 +10,7 @@ include::install_remove.asciidoc[]
==== Working around a bug in Windows SMB and Java on windows

When using a shared file system based on the SMB protocol (like Azure File Service) to store indices, the way Lucene
-open index segment files is with a write only flag. This is the _correct_ way to open the files, as they will only be
+opens index segment files is with a write only flag. This is the _correct_ way to open the files, as they will only be
used for writes and allows different FS implementations to optimize for it. Sadly, in windows with SMB, this disables
the cache manager, causing writes to be slow. This has been described in
https://issues.apache.org/jira/browse/LUCENE-6176[LUCENE-6176], but it affects each and every Java program out there!.
@@ -44,7 +44,7 @@ This can be configured for all indices by adding this to the `elasticsearch.yml`
index.store.type: smb_nio_fs
----

-Note that setting will be applied for newly created indices.
+Note that settings will be applied for newly created indices.

It can also be set on a per-index basis at index creation time:

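The elided per-index example presumably resembles this sketch (the index name is illustrative):

[source,console]
----
PUT /my-smb-index
{
  "settings": {
    "index.store.type": "smb_nio_fs"
  }
}
----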
