diff --git a/docs/reference/index.asciidoc b/docs/reference/index.asciidoc
index 79b5f2b69f24d..24dbee8c2983b 100644
--- a/docs/reference/index.asciidoc
+++ b/docs/reference/index.asciidoc
@@ -6,10 +6,10 @@ include::links.asciidoc[]
include::landing-page.asciidoc[]
-include::intro.asciidoc[]
-
include::release-notes/highlights.asciidoc[]
+include::intro.asciidoc[]
+
include::quickstart/index.asciidoc[]
include::setup.asciidoc[]
diff --git a/docs/reference/intro.asciidoc b/docs/reference/intro.asciidoc
index f80856368af2b..831888103c5c1 100644
--- a/docs/reference/intro.asciidoc
+++ b/docs/reference/intro.asciidoc
@@ -1,68 +1,98 @@
[[elasticsearch-intro]]
-== What is {es}?
+== {es} basics
+
+This guide covers the core concepts you need to understand to get started with {es}.
+If you'd prefer to start working with {es} right away, set up a <> and jump to <>.
+
+It covers the following topics:
+
+* <>: Learn about {es} and some of its main use cases.
+* <>: Understand your options for deploying {es} in different environments, including a fast local development setup.
+* <>: Understand {es}'s most important primitives and how it stores data.
+* <>: Understand your options for ingesting data into {es}.
+* <>: Understand your options for searching and analyzing data in {es}.
+* <>: Understand the basic concepts required for moving your {es} deployment to production.
+
+[[elasticsearch-intro-what-is-es]]
+=== What is {es}?
{es-repo}[{es}] is a distributed search and analytics engine, scalable data store, and vector database built on Apache Lucene.
It's optimized for speed and relevance on production-scale workloads.
Use {es} to search, index, store, and analyze data of all shapes and sizes in near real time.
+{es} is the heart of the {estc-welcome-current}/stack-components.html[Elastic Stack].
+Combined with https://www.elastic.co/kibana[{kib}], it powers the following Elastic solutions:
+
+* https://www.elastic.co/observability[Observability]
+* https://www.elastic.co/enterprise-search[Search]
+* https://www.elastic.co/security[Security]
+
[TIP]
====
{es} has a lot of features. Explore the full list on the https://www.elastic.co/elasticsearch/features[product webpage^].
====
-{es} is the heart of the {estc-welcome-current}/stack-components.html[Elastic Stack] and powers the Elastic https://www.elastic.co/enterprise-search[Search], https://www.elastic.co/observability[Observability] and https://www.elastic.co/security[Security] solutions.
-
-{es} is used for a wide and growing range of use cases. Here are a few examples:
-
-* *Monitor log and event data*: Store logs, metrics, and event data for observability and security information and event management (SIEM).
-* *Build search applications*: Add search capabilities to apps or websites, or build search engines over internal data.
-* *Vector database*: Store and search vectorized data, and create vector embeddings with built-in and third-party natural language processing (NLP) models.
-* *Retrieval augmented generation (RAG)*: Use {es} as a retrieval engine to augment generative AI models.
-* *Application and security monitoring*: Monitor and analyze application performance and security data.
-* *Machine learning*: Use {ml} to automatically model the behavior of your data in real-time.
-
-This is just a sample of search, observability, and security use cases enabled by {es}.
-Refer to our https://www.elastic.co/customers/success-stories[customer success stories] for concrete examples across a range of industries.
-// Link to demos, search labs chatbots
-
[discrete]
[[elasticsearch-intro-elastic-stack]]
.What is the Elastic Stack?
*******************************
{es} is the core component of the Elastic Stack, a suite of products for collecting, storing, searching, and visualizing data.
-https://www.elastic.co/guide/en/starting-with-the-elasticsearch-platform-and-its-solutions/current/stack-components.html[Learn more about the Elastic Stack].
+{estc-welcome-current}/stack-components.html[Learn more about the Elastic Stack].
*******************************
-// TODO: Remove once we've moved Stack Overview to a subpage?
[discrete]
+[[elasticsearch-intro-use-cases]]
+==== Use cases
+
+{es} is used for a wide and growing range of use cases. Here are a few examples:
+
+**Observability**
+
+* *Logs, metrics, and traces*: Collect, store, and analyze logs, metrics, and traces from applications, systems, and services.
+* *Application performance monitoring (APM)*: Monitor and analyze the performance of business-critical software applications.
+* *Real user monitoring (RUM)*: Monitor, quantify, and analyze user interactions with web applications.
+* *OpenTelemetry*: Reuse your existing instrumentation to send telemetry data to the Elastic Stack using the OpenTelemetry standard.
+
+**Search**
+
+* *Full-text search*: Build a fast, relevant full-text search solution using inverted indexes, tokenization, and text analysis.
+* *Vector database*: Store and search vectorized data, and create vector embeddings with built-in and third-party natural language processing (NLP) models.
+* *Semantic search*: Understand the intent and contextual meaning behind search queries using tools like synonyms, dense vector embeddings, and learned sparse query-document expansion.
+* *Hybrid search*: Combine full-text search with vector search using state-of-the-art ranking algorithms.
+* *Build search experiences*: Add hybrid search capabilities to apps or websites, or build enterprise search engines over your organization's internal data sources.
+* *Retrieval augmented generation (RAG)*: Use {es} as a retrieval engine to supplement generative AI models with more relevant, up-to-date, or proprietary data for a range of use cases.
+* *Geospatial search*: Search for locations and calculate spatial relationships using geospatial queries.
+
+**Security**
+
+* *Security information and event management (SIEM)*: Collect, store, and analyze security data from applications, systems, and services.
+* *Endpoint security*: Monitor and analyze endpoint security data.
+* *Threat hunting*: Search and analyze data to detect and respond to security threats.
+
+This is just a sample of search, observability, and security use cases enabled by {es}.
+Refer to Elastic https://www.elastic.co/customers/success-stories[customer success stories] for concrete examples across a range of industries.
+
[[elasticsearch-intro-deploy]]
-=== Deployment options
+=== Run {es}
To use {es}, you need a running instance of the {es} service.
-You can deploy {es} in various ways:
+You can deploy {es} in various ways.
-* <>: Get started quickly with a minimal local Docker setup.
-* {cloud}/ec-getting-started-trial.html[*Elastic Cloud*]: {es} is available as part of our hosted Elastic Stack offering, deployed in the cloud with your provider of choice. Sign up for a https://cloud.elastic.co/registration[14-day free trial].
+**Quick start option**
+
+* <>: Get started quickly with a minimal local Docker setup for development and testing.
+
+**Hosted options**
+
+* {cloud}/ec-getting-started-trial.html[*Elastic Cloud Hosted*]: {es} is available as part of the hosted Elastic Stack offering, deployed in the cloud with your provider of choice. Sign up for a https://cloud.elastic.co/registration[14-day free trial].
* {serverless-docs}/general/sign-up-trial[*Elastic Cloud Serverless* (technical preview)]: Create serverless projects for autoscaled and fully managed {es} deployments. Sign up for a https://cloud.elastic.co/serverless-registration[14-day free trial].
-**Advanced deployment options**
+**Advanced options**
* <>: Install, configure, and run {es} on your own premises.
* {ece-ref}/Elastic-Cloud-Enterprise-overview.html[*Elastic Cloud Enterprise*]: Deploy Elastic Cloud on public or private clouds, virtual machines, or your own premises.
* {eck-ref}/k8s-overview.html[*Elastic Cloud on Kubernetes*]: Deploy Elastic Cloud on Kubernetes.
-[discrete]
-[[elasticsearch-next-steps]]
-=== Learn more
-
-Here are some resources to help you get started:
-
-* <>: A beginner's guide to deploying your first {es} instance, indexing data, and running queries.
-* https://elastic.co/webinars/getting-started-elasticsearch[Webinar: Introduction to {es}]: Register for our live webinars to learn directly from {es} experts.
-* https://www.elastic.co/search-labs[Elastic Search Labs]: Tutorials and blogs that explore AI-powered search using the latest {es} features.
-** Follow our tutorial https://www.elastic.co/search-labs/tutorials/search-tutorial/welcome[to build a hybrid search solution in Python].
-** Check out the https://github.com/elastic/elasticsearch-labs?tab=readme-ov-file#elasticsearch-examples--apps[`elasticsearch-labs` repository] for a range of Python notebooks and apps for various use cases.
-
// new html page
[[documents-indices]]
=== Indices, documents, and fields
@@ -73,20 +103,16 @@ Here are some resources to help you get started:
The index is the fundamental unit of storage in {es}, a logical namespace for storing data that share similar characteristics.
After you have {es} <>, you'll get started by creating an index to store your data.
+An index is a collection of documents, uniquely identified by a name or an <>.
+This unique name is important because it's used to target the index in search queries and other operations.
+
[TIP]
====
A closely related concept is a <>.
-This index abstraction is optimized for append-only time-series data, and is made up of hidden, auto-generated backing indices.
-If you're working with time-series data, we recommend the {observability-guide}[Elastic Observability] solution.
+This index abstraction is optimized for append-only timestamped data, and is made up of hidden, auto-generated backing indices.
+If you're working with timestamped data, we recommend the {observability-guide}[Elastic Observability] solution for additional tools and optimized content.
====
-Some key facts about indices:
-
-* An index is a collection of documents
-* An index has a unique name
-* An index can also be referred to by an alias
-* An index has a mapping that defines the schema of its documents
-
[discrete]
[[elasticsearch-intro-documents-fields]]
==== Documents and fields
@@ -126,14 +152,12 @@ A simple {es} document might look like this:
[discrete]
[[elasticsearch-intro-documents-fields-data-metadata]]
-==== Data and metadata
+==== Metadata fields
-An indexed document contains data and metadata.
+An indexed document contains data and metadata. <> are system fields that store information about the documents.
In {es}, metadata fields are prefixed with an underscore.
+For example, the following fields are metadata fields:
-The most important metadata fields are:
-
-* `_source`: Contains the original JSON document.
* `_index`: The name of the index where the document is stored.
* `_id`: The document's ID. IDs must be unique per index.
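+
+For example, a search hit returns metadata fields alongside the document's data. This is a sketch, assuming a hypothetical index named `my-index`:
+
+[source,console-result]
+----
+{
+  "_index": "my-index",
+  "_id": "1",
+  "_source": {
+    "title": "Hello world"
+  }
+}
+----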
@@ -146,8 +170,8 @@ A mapping defines the <> for each field, how the field
and how it should be stored.
When adding documents to {es}, you have two options for mappings:
-* <>: Let {es} automatically detect the data types and create the mappings for you. This is great for getting started quickly, but can lead to unexpected results for complex data.
-* <>: Define the mappings up front by specifying data types for each field. Recommended for production use cases, because you have much more control over how your data is indexed.
+* <>: Let {es} automatically detect the data types and create the mappings for you. Dynamic mapping helps you get started quickly, but might yield suboptimal results for your specific use case due to automatic field type inference.
+* <>: Define the mappings up front by specifying data types for each field. Recommended for production use cases, because you have full control over how your data is indexed to suit your specific use case.
[TIP]
====
@@ -155,81 +179,207 @@ You can use a combination of dynamic and explicit mapping on the same index.
This is useful when you have a mix of known and unknown fields in your data.
====
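+
+For example, you can create an index with explicit mappings for known fields and let dynamic mapping handle any others. This is a sketch, using a hypothetical `products` index:
+
+[source,console]
+----
+PUT products
+{
+  "mappings": {
+    "properties": {
+      "name":  { "type": "text" },
+      "price": { "type": "float" }
+    }
+  }
+}
+----
+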
+// New html page
+[[es-ingestion-overview]]
+=== Add data to {es}
+
+There are multiple ways to ingest data into {es}.
+The option that you choose depends on whether you're working with timestamped data or non-timestamped data, where the data is coming from, its complexity, and more.
+
+[TIP]
+====
+To get started quickly, you can load {kibana-ref}/connect-to-elasticsearch.html#_add_sample_data[sample data] into your {es} cluster using {kib}.
+====
+
+[discrete]
+[[es-ingestion-overview-general-content]]
+==== General content
+
+General content is data that does not have a timestamp.
+This could be data like vector embeddings, website content, product catalogs, and more.
+For general content, you have the following options for adding data to {es} indices:
+
+* <>: Use the {es} <> to index documents directly, using the Dev Tools {kibana-ref}/console-kibana.html[Console] or cURL.
++
+If you're building a website or app, then you can call Elasticsearch APIs using an https://www.elastic.co/guide/en/elasticsearch/client/index.html[{es} client] in the programming language of your choice. If you use the Python client, then check out the `elasticsearch-labs` repo for various https://github.com/elastic/elasticsearch-labs/tree/main/notebooks/search/python-examples[example notebooks].
+* {kibana-ref}/connect-to-elasticsearch.html#upload-data-kibana[File upload]: Use the {kib} file uploader to index single files for one-off testing and exploration. The GUI guides you through setting up your index and field mappings.
+* https://github.com/elastic/crawler[Web crawler]: Extract and index web page content into {es} documents.
+* {enterprise-search-ref}/connectors.html[Connectors]: Sync data from various third-party data sources to create searchable, read-only replicas in {es}.
+
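+For example, you can index a single document with one API request. This is a sketch, using a hypothetical index named `my-index`:
+
+[source,console]
+----
+POST my-index/_doc
+{
+  "title": "Hello world",
+  "category": "example"
+}
+----
+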
+[discrete]
+[[es-ingestion-overview-timestamped]]
+==== Timestamped data
+
+Timestamped data in {es} refers to datasets that include a timestamp field. If you use the {ecs-ref}/ecs-reference.html[Elastic Common Schema (ECS)], this field is named `@timestamp`.
+This could be data like logs, metrics, and traces.
+
+For timestamped data, you have the following options for adding data to {es} data streams:
+
+* {fleet-guide}/fleet-overview.html[Elastic Agent and Fleet]: The preferred way to index timestamped data. Each Elastic Agent-based integration includes default ingestion rules, dashboards, and visualizations to start analyzing your data right away.
+You can use the Fleet UI in {kib} to centrally manage Elastic Agents and their policies.
+* {beats-ref}/beats-reference.html[Beats]: If your data source isn't supported by Elastic Agent, use Beats to collect and ship data to Elasticsearch. You install a separate Beat for each type of data to collect.
+* {logstash-ref}/introduction.html[Logstash]: Logstash is an open source data collection engine with real-time pipelining capabilities that supports a wide variety of data sources. You might use this option if neither Elastic Agent nor Beats supports your data source. You can also use Logstash to persist incoming data or to send data to multiple destinations.
+* {cloud}/ec-ingest-guides.html[Language clients]: The linked tutorials demonstrate how to use {es} programming language clients to ingest data from an application. In these examples, {es} is running on Elastic Cloud, but the same principles apply to any {es} deployment.
+
+[TIP]
+====
+If you're interested in data ingestion pipelines for timestamped data, use the decision tree in the {cloud}/ec-cloud-ingest-data.html#ec-data-ingest-pipeline[Elastic Cloud docs] to understand your options.
+====
+
// New html page
[[search-analyze]]
-=== Search and analyze
+=== Search and analyze data
-While you can use {es} as a document store and retrieve documents and their
-metadata, the real power comes from being able to easily access the full suite
-of search capabilities built on the Apache Lucene search engine library.
+You can use {es} as a basic document store to retrieve documents and their
+metadata.
+However, the real power of {es} comes from its advanced search and analytics capabilities.
-{es} provides a simple, coherent REST API for managing your cluster and indexing
-and searching your data. For testing purposes, you can easily submit requests
-directly from the command line or through the Developer Console in {kib}. From
-your applications, you can use the
-https://www.elastic.co/guide/en/elasticsearch/client/index.html[{es} client]
-for your language of choice: Java, JavaScript, Go, .NET, PHP, Perl, Python
-or Ruby.
+You'll use a combination of an API endpoint and a query language to interact with your data.
[discrete]
-[[search-data]]
-==== Searching your data
-
-The {es} REST APIs support structured queries, full text queries, and complex
-queries that combine the two. Structured queries are
-similar to the types of queries you can construct in SQL. For example, you
-could search the `gender` and `age` fields in your `employee` index and sort the
-matches by the `hire_date` field. Full-text queries find all documents that
-match the query string and return them sorted by _relevance_—how good a
-match they are for your search terms.
-
-In addition to searching for individual terms, you can perform phrase searches,
-similarity searches, and prefix searches, and get autocomplete suggestions.
-
-Have geospatial or other numerical data that you want to search? {es} indexes
-non-textual data in optimized data structures that support
-high-performance geo and numerical queries.
-
-You can access all of these search capabilities using {es}'s
-comprehensive JSON-style query language (<>). You can also
-construct <> to search and aggregate data
-natively inside {es}, and JDBC and ODBC drivers enable a broad range of
-third-party applications to interact with {es} via SQL.
+[[search-analyze-rest-api]]
+==== REST API
+
+Use REST APIs to manage your {es} cluster, and to index
+and search your data.
+For testing purposes, you can submit requests
+directly from the command line or through the Dev Tools {kibana-ref}/console-kibana.html[Console] in {kib}.
+From your applications, you can use a
+https://www.elastic.co/guide/en/elasticsearch/client/index.html[client]
+in your programming language of choice.
+
+Refer to <> for a hands-on example of using the `_search` endpoint, adding data to {es}, and running basic searches in Query DSL syntax.
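+
+For example, you can check cluster health from the command line with cURL. This is a sketch, assuming a local {es} instance running without security enabled:
+
+[source,sh]
+----
+curl -X GET "http://localhost:9200/_cluster/health?pretty"
+----
+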
[discrete]
-[[analyze-data]]
-==== Analyzing your data
+[[search-analyze-query-languages]]
+==== Query languages
+
+{es} provides a number of query languages for interacting with your data.
+
+*Query DSL* is the primary query language for {es} today.
+
+*{esql}* is a new piped query language and compute engine that was first added in version *8.11*.
+
+{esql} does not yet support all the features of Query DSL, like full-text search and semantic search.
+New {esql} features and functionality are added in each release.
+
+Refer to <> for a full overview of the query languages available in {es}.
+
+[discrete]
+[[search-analyze-query-dsl]]
+===== Query DSL
+
+<> is a full-featured JSON-style query language that enables complex searching, filtering, and aggregations.
+It is the original and most powerful query language for {es} today.
+
+The <> accepts queries written in Query DSL syntax.
+
+[discrete]
+[[search-analyze-query-dsl-search-filter]]
+====== Search and filter with Query DSL
+
+Query DSL supports a wide range of search techniques, including the following:
+
+* <>: Search text that has been analyzed and indexed to support phrase or proximity queries, fuzzy matches, and more.
+* <>: Search for exact matches using `keyword` fields.
+* <>: Search `semantic_text` fields using dense or sparse vector search on embeddings generated in your {es} cluster.
+* <>: Search for similar dense vectors using the kNN algorithm for embeddings generated outside of {es}.
+* <>: Search for locations and calculate spatial relationships using geospatial queries.
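+
+For example, a basic full-text `match` query looks like this. This is a sketch, using a hypothetical `articles` index:
+
+[source,console]
+----
+GET articles/_search
+{
+  "query": {
+    "match": { "title": "elasticsearch basics" }
+  }
+}
+----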
-{es} aggregations enable you to build complex summaries of your data and gain
-insight into key metrics, patterns, and trends. Instead of just finding the
-proverbial “needle in a haystack”, aggregations enable you to answer questions
-like:
+Learn about the full range of queries supported by <>.
-* How many needles are in the haystack?
-* What is the average length of the needles?
-* What is the median length of the needles, broken down by manufacturer?
-* How many needles were added to the haystack in each of the last six months?
+You can also filter data using Query DSL.
+Filters enable you to include or exclude documents that match specific field-level criteria.
+A query that uses the `filter` parameter indicates <>.
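+
+For example, a `bool` query can combine a full-text query with a filter. This is a sketch, using a hypothetical `articles` index with a `status` keyword field:
+
+[source,console]
+----
+GET articles/_search
+{
+  "query": {
+    "bool": {
+      "must": [
+        { "match": { "title": "elasticsearch" } }
+      ],
+      "filter": [
+        { "term": { "status": "published" } }
+      ]
+    }
+  }
+}
+----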
-You can also use aggregations to answer more subtle questions, such as:
+[discrete]
+[[search-analyze-data-query-dsl]]
+====== Analyze with Query DSL
-* What are your most popular needle manufacturers?
-* Are there any unusual or anomalous clumps of needles?
+<> are the primary tool for analyzing {es} data using Query DSL.
+Aggregations enable you to build complex summaries of your data and gain
+insight into key metrics, patterns, and trends.
-Because aggregations leverage the same data-structures used for search, they are
+Because aggregations leverage the same data structures used for search, they are
also very fast. This enables you to analyze and visualize your data in real time.
-Your reports and dashboards update as your data changes so you can take action
-based on the latest information.
+You can search documents, filter results, and perform analytics at the same time, on the same
+data, in a single request.
+That means aggregations are calculated in the context of the search query.
+
+The following aggregation types are available:
+
+* <>: Calculate metrics,
+such as a sum or average, from field values.
+* <>: Group documents into buckets based on field values, ranges,
+or other criteria.
+* <>: Run aggregations on the results of other aggregations.
+
+Run aggregations by specifying the <>'s `aggs` parameter.
+Learn more in <>.
+
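+For example, this sketch groups documents in a hypothetical `sales` index into buckets by category and calculates the average price within each bucket:
+
+[source,console]
+----
+GET sales/_search
+{
+  "size": 0,
+  "aggs": {
+    "by_category": {
+      "terms": { "field": "category" },
+      "aggs": {
+        "avg_price": { "avg": { "field": "price" } }
+      }
+    }
+  }
+}
+----
+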
+[discrete]
+[[search-analyze-data-esql]]
+===== {esql}
-What’s more, aggregations operate alongside search requests. You can search
-documents, filter results, and perform analytics at the same time, on the same
-data, in a single request. And because aggregations are calculated in the
-context of a particular search, you’re not just displaying a count of all
-size 70 needles, you’re displaying a count of the size 70 needles
-that match your users' search criteria--for example, all size 70 _non-stick
-embroidery_ needles.
+<> is a piped query language for filtering, transforming, and analyzing data.
+{esql} is built on top of a new compute engine, where search, aggregation, and transformation functions are
+directly executed within {es} itself.
+{esql} syntax can also be used within various {kib} tools.
+
+The <> accepts queries written in {esql} syntax.
+
+Today, it supports a subset of the features available in Query DSL, like aggregations, filters, and transformations.
+It does not yet support full-text search or semantic search.
+
+It comes with a comprehensive set of <> for working with data and has robust integration with {kib}'s Discover, dashboards, and visualizations.
+
+Learn more in <>, or try https://www.elastic.co/training/introduction-to-esql[our training course].
+
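+For example, a minimal {esql} query might filter and aggregate timestamped data. This is a sketch, using a hypothetical `sample_logs` index:
+
+[source,esql]
+----
+FROM sample_logs
+| WHERE status_code >= 500
+| STATS error_count = COUNT(*) BY host
+| SORT error_count DESC
+| LIMIT 10
+----
+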
+[discrete]
+[[search-analyze-data-query-languages-table]]
+==== List of available query languages
+The following table summarizes all available {es} query languages to help you choose the right one for your use case.
+
+[cols="1,2,2,1", options="header"]
+|===
+| Name | Description | Use cases | API endpoint
+
+| <>
+| The primary query language for {es}. A powerful and flexible JSON-style language that enables complex queries.
+| Full-text search, semantic search, keyword search, filtering, aggregations, and more.
+| <>
+
+
+| <>
+| Introduced in *8.11*, the Elasticsearch Query Language ({esql}) is a piped query language for filtering, transforming, and analyzing data.
+| Initially tailored towards working with time series data like logs and metrics.
+Robust integration with {kib} for querying, visualizing, and analyzing data.
+Does not yet support full-text search.
+| <>
+
+
+| <>
+| Event Query Language (EQL) is a query language for event-based time series data. Data must contain the `@timestamp` field to use EQL.
+| Designed for the threat hunting security use case.
+| <>
+
+| <>
+| Allows native, real-time SQL-like querying against {es} data. JDBC and ODBC drivers are available for integration with business intelligence (BI) tools.
+| Enables users familiar with SQL to query {es} data using familiar syntax for BI and reporting.
+| <>
+
+| {kibana-ref}/kuery-query.html[Kibana Query Language (KQL)]
+| Kibana Query Language (KQL) is a text-based query language for filtering data when you access it through the {kib} UI.
+| Use KQL to filter documents where a value for a field exists, matches a given value, or is within a given range.
+| N/A
+
+|===
+
+// New html page
+// TODO: this page won't live here long term
[[scalability]]
-=== Scalability and resilience
+=== Plan for production
{es} is built to be always available and to scale with your needs. It does this
by being distributed by nature. You can add servers (nodes) to a cluster to
diff --git a/docs/reference/landing-page.asciidoc b/docs/reference/landing-page.asciidoc
index e781dc0aff4e3..f1b5ce8210996 100644
--- a/docs/reference/landing-page.asciidoc
+++ b/docs/reference/landing-page.asciidoc
@@ -62,7 +62,7 @@
Elasticsearch is the search and analytics engine that powers the Elastic Stack.
diff --git a/docs/reference/quickstart/getting-started.asciidoc b/docs/reference/quickstart/getting-started.asciidoc
index 6b3095e07f9d4..e674dda147bcc 100644
--- a/docs/reference/quickstart/getting-started.asciidoc
+++ b/docs/reference/quickstart/getting-started.asciidoc
@@ -1,47 +1,20 @@
[[getting-started]]
-== Quick start guide
+== Quick start: Add data using Elasticsearch APIs
+++++
+Basics: Add data using APIs
+++++
-This guide helps you learn how to:
+In this quick start guide, you'll learn how to do the following tasks:
-* Run {es} and {kib} (using {ecloud} or in a local Docker dev environment),
-* add simple (non-timestamped) dataset to {es},
-* run basic searches.
-
-[TIP]
-====
-If you're interested in using {es} with Python, check out Elastic Search Labs. This is the best place to explore AI-powered search use cases, such as working with embeddings, vector search, and retrieval augmented generation (RAG).
-
-* https://www.elastic.co/search-labs/tutorials/search-tutorial/welcome[Tutorial]: this walks you through building a complete search solution with {es}, from the ground up.
-* https://github.com/elastic/elasticsearch-labs[`elasticsearch-labs` repository]: it contains a range of Python https://github.com/elastic/elasticsearch-labs/tree/main/notebooks[notebooks] and https://github.com/elastic/elasticsearch-labs/tree/main/example-apps[example apps].
-====
-
-[discrete]
-[[run-elasticsearch]]
-=== Run {es}
-
-The simplest way to set up {es} is to create a managed deployment with {ess} on
-{ecloud}. If you prefer to manage your own test environment, install and
-run {es} using Docker.
-
-include::{es-ref-dir}/tab-widgets/code.asciidoc[]
-include::{es-ref-dir}/tab-widgets/quick-start-install-widget.asciidoc[]
-
-[discrete]
-[[send-requests-to-elasticsearch]]
-=== Send requests to {es}
-
-You send data and other requests to {es} using REST APIs. This lets you interact
-with {es} using any client that sends HTTP requests, such as
-https://curl.se[curl]. You can also use {kib}'s Console to send requests to
-{es}.
-
-include::{es-ref-dir}/tab-widgets/api-call-widget.asciidoc[]
+* Add a small, non-timestamped dataset to {es} using Elasticsearch REST APIs.
+* Run basic searches.
[discrete]
[[add-data]]
=== Add data
-You add data to {es} as JSON objects called documents. {es} stores these
+You add data to {es} as JSON objects called documents.
+{es} stores these
documents in searchable indices.
[discrete]
@@ -58,6 +31,13 @@ The request automatically creates the index.
PUT books
----
// TESTSETUP
+
+[source,console]
+--------------------------------------------------
+DELETE books
+--------------------------------------------------
+// TEARDOWN
+
////
[source,console]
@@ -236,10 +216,11 @@ JSON object submitted during indexing.
[[qs-match-query]]
==== `match` query
-You can use the `match` query to search for documents that contain a specific value in a specific field.
+You can use the <> to search for documents that contain a specific value in a specific field.
This is the standard query for performing full-text search, including fuzzy matching and phrase searches.
Run the following command to search the `books` index for documents containing `brave` in the `name` field:
+
[source,console]
----
GET books/_search
@@ -251,34 +232,4 @@ GET books/_search
}
}
----
-// TEST[continued]
-
-[discrete]
-[[whats-next]]
-=== Next steps
-
-Now that {es} is up and running and you've learned the basics, you'll probably want to test out larger datasets, or index your own data.
-
-[discrete]
-[[whats-next-search-learn-more]]
-==== Learn more about search queries
-
-* <>. Jump here to learn about exact value search, full-text search, vector search, and more, using the <>.
-
-[discrete]
-[[whats-next-more-data]]
-==== Add more data
-
-* Learn how to {kibana-ref}/sample-data.html[install sample data] using {kib}. This is a quick way to test out {es} on larger workloads.
-* Learn how to use the {kibana-ref}/connect-to-elasticsearch.html#upload-data-kibana[upload data UI] in {kib} to add your own CSV, TSV, or JSON files.
-* Use the https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-bulk.html[bulk API] to ingest your own datasets to {es}.
-
-[discrete]
-[[whats-next-client-libraries]]
-==== {es} programming language clients
-
-* Check out our https://www.elastic.co/guide/en/elasticsearch/client/index.html[client library] to work with your {es} instance in your preferred programming language.
-* If you're using Python, check out https://www.elastic.co/search-labs[Elastic Search Labs] for a range of examples that use the {es} Python client. This is the best place to explore AI-powered search use cases, such as working with embeddings, vector search, and retrieval augmented generation (RAG).
-** This extensive, hands-on https://www.elastic.co/search-labs/tutorials/search-tutorial/welcome[tutorial]
-walks you through building a complete search solution with {es}, from the ground up.
-** https://github.com/elastic/elasticsearch-labs[`elasticsearch-labs`] contains a range of executable Python https://github.com/elastic/elasticsearch-labs/tree/main/notebooks[notebooks] and https://github.com/elastic/elasticsearch-labs/tree/main/example-apps[example apps].
\ No newline at end of file
+// TEST[continued]
\ No newline at end of file
diff --git a/docs/reference/quickstart/index.asciidoc b/docs/reference/quickstart/index.asciidoc
index e517d039e620b..6bfed4c198c75 100644
--- a/docs/reference/quickstart/index.asciidoc
+++ b/docs/reference/quickstart/index.asciidoc
@@ -1,10 +1,29 @@
[[quickstart]]
-= Quickstart
+= Quick starts
-Get started quickly with {es}.
+Use these quick starts to get hands-on experience with the {es} APIs.
+Unless otherwise noted, these examples will use queries written in <> syntax.
-* Learn how to run {es} (and {kib}) for <>.
-* Follow our <> to add data to {es} and query it.
+[discrete]
+[[quickstart-requirements]]
+== Requirements
-include::run-elasticsearch-locally.asciidoc[]
-include::getting-started.asciidoc[]
+You'll need a running {es} cluster, together with {kib}, to use the Dev Tools API Console.
+Get started <>, or see our <>.
+
+[discrete]
+[[quickstart-list]]
+== Hands-on quick starts
+
+* <>. Learn how to add data to {es} and perform basic searches.
+
+[discrete]
+[[quickstart-python-links]]
+== Working in Python
+
+If you're interested in using {es} with Python, check out Elastic Search Labs:
+
+* https://github.com/elastic/elasticsearch-labs[`elasticsearch-labs` repository]: Contains a range of Python https://github.com/elastic/elasticsearch-labs/tree/main/notebooks[notebooks] and https://github.com/elastic/elasticsearch-labs/tree/main/example-apps[example apps].
+* https://www.elastic.co/search-labs/tutorials/search-tutorial/welcome[Tutorial]: This walks you through building a complete search solution with {es} from the ground up using Flask.
+
+include::getting-started.asciidoc[]
\ No newline at end of file
diff --git a/docs/reference/quickstart/run-elasticsearch-locally.asciidoc b/docs/reference/run-elasticsearch-locally.asciidoc
similarity index 68%
rename from docs/reference/quickstart/run-elasticsearch-locally.asciidoc
rename to docs/reference/run-elasticsearch-locally.asciidoc
index 24e0f3f22350e..64bcd3d066529 100644
--- a/docs/reference/quickstart/run-elasticsearch-locally.asciidoc
+++ b/docs/reference/run-elasticsearch-locally.asciidoc
@@ -1,7 +1,7 @@
[[run-elasticsearch-locally]]
-== Run {es} locally in Docker (without security)
+== Run {es} locally in Docker
++++
-Local dev setup (Docker)
+Run {es} locally
++++
[WARNING]
@@ -9,24 +9,13 @@
*DO NOT USE THESE INSTRUCTIONS FOR PRODUCTION DEPLOYMENTS*
The instructions on this page are for *local development only*. Do not use these instructions for production deployments, because they are not secure.
-While this approach is convenient for experimenting and learning, you should never run the service in this way in a production environment.
+While this approach is convenient for experimenting and learning, you should never run Elasticsearch in this way in a production environment.
====
-The following commands help you very quickly spin up a single-node {es} cluster, together with {kib} in Docker.
-Note that if you don't need the {kib} UI, you can skip those instructions.
+Follow this tutorial if you want to quickly set up {es} in Docker for local development or testing.
-[discrete]
-[[local-dev-why]]
-=== When would I use this setup?
-
-Use this setup if you want to quickly spin up {es} (and {kib}) for local development or testing.
-
-For example you might:
-
-* Want to run a quick test to see how a feature works.
-* Follow a tutorial or guide that requires an {es} cluster, like our <>.
-* Experiment with the {es} APIs using different tools, like the Dev Tools Console, cURL, or an Elastic programming language client.
-* Quickly spin up an {es} cluster to test an executable https://github.com/elastic/elasticsearch-labs/tree/main/notebooks#readme[Python notebook] locally.
+This tutorial also includes instructions for installing {kib}.
+ If you don't need access to the {kib} UI, then you can skip those instructions.
[discrete]
[[local-dev-prerequisites]]
@@ -118,12 +107,12 @@ When you access {kib}, use `elastic` as the username and the password you set ea
[NOTE]
====
-The service is started with a trial license. The trial license enables all features of Elasticsearch for a trial period of 30 days. After the trial period expires, the license is downgraded to a basic license, which is free forever. If you prefer to skip the trial and use the basic license, set the value of the `xpack.license.self_generated.type` variable to basic instead. For a detailed feature comparison between the different licenses, refer to our https://www.elastic.co/subscriptions[subscriptions page].
+The service is started with a trial license. The trial license enables all features of Elasticsearch for a trial period of 30 days. After the trial period expires, the license is downgraded to a basic license, which is free forever.
====
[discrete]
[[local-dev-connecting-clients]]
-== Connecting to {es} with language clients
+=== Connect to {es} with language clients
To connect to the {es} cluster from a language client, you can use basic authentication with the `elastic` username and the password you set in the environment variable.
@@ -172,12 +161,11 @@ curl -u elastic:$ELASTIC_PASSWORD \
[[local-dev-next-steps]]
=== Next steps
-Use our <> to learn the basics of {es}: how to add data and query it.
+Use our <> to learn the basics of {es}.
[discrete]
[[local-dev-production]]
=== Moving to production
-This setup is not suitable for production use. For production deployments, we recommend using our managed service on Elastic Cloud. https://cloud.elastic.co/registration[Sign up for a free trial] (no credit card required).
-
-Otherwise, refer to https://www.elastic.co/guide/en/elasticsearch/reference/current/install-elasticsearch.html[Install {es}] to learn about the various options for installing {es} in a self-managed production environment, including using Docker.
+This setup is not suitable for production use.
+Refer to <> for more information.
\ No newline at end of file
diff --git a/docs/reference/setup.asciidoc b/docs/reference/setup.asciidoc
index b346fddc5e5a1..a284e563917c3 100644
--- a/docs/reference/setup.asciidoc
+++ b/docs/reference/setup.asciidoc
@@ -27,6 +27,8 @@ the only resource-intensive application on the host or container. For example,
you might run {metricbeat} alongside {es} for cluster statistics, but a
resource-heavy {ls} deployment should be on its own host.
+include::run-elasticsearch-locally.asciidoc[]
+
include::setup/install.asciidoc[]
include::setup/configuration.asciidoc[]
diff --git a/docs/reference/tab-widgets/api-call.asciidoc b/docs/reference/tab-widgets/api-call.asciidoc
index bb6b89374075d..5e70d73684436 100644
--- a/docs/reference/tab-widgets/api-call.asciidoc
+++ b/docs/reference/tab-widgets/api-call.asciidoc
@@ -1,5 +1,5 @@
// tag::cloud[]
-**Use {kib}**
+**Option 1: Use {kib}**
//tag::kibana-api-ex[]
. Open {kib}'s main menu ("*☰*" near Elastic logo) and go to **Dev Tools > Console**.
@@ -16,9 +16,9 @@ GET /
//end::kibana-api-ex[]
-**Use curl**
+**Option 2: Use `curl`**
-To communicate with {es} using curl or another client, you need your cluster's
+To communicate with {es} using `curl` or another client, you need your cluster's
endpoint.
. Open {kib}'s main menu and click **Manage this deployment**.
@@ -26,7 +26,7 @@ endpoint.
. From your deployment menu, go to the **Elasticsearch** page. Click **Copy
endpoint**.
-. To submit an example API request, run the following curl command in a new
+. To submit an example API request, run the following `curl` command in a new
terminal session. Replace `` with the password for the `elastic` user.
Replace `` with your endpoint.
+