This is the Elixir implementation of the Algolia search API. It is purely functional.
Add `:algolia` to your dependencies:

```elixir
defp deps do
  [{:algolia, "~> 0.8.0"}]
end
```
(Pre-Elixir 1.4) Add `:algolia` to your applications:

```elixir
def application do
  [applications: [:algolia]]
end
```
Configure your credentials either as environment variables:

```
ALGOLIA_APPLICATION_ID=YOUR_APPLICATION_ID
ALGOLIA_API_KEY=YOUR_API_KEY
```

or in your application config:

```elixir
config :algolia,
  application_id: "YOUR_APPLICATION_ID",
  api_key: "YOUR_API_KEY"
```
NOTE: You must use the ADMIN API key, not the SEARCH API key, to enable write access.
Unlike other OO Algolia clients, you don't need to initialize an index with this client. However, most of the client's search/write functions use the syntax

```elixir
operation(index, args...)
```

so you can easily emulate the `index.operation()` syntax using piping:

```elixir
"my_index" |> operation(args)
```
All responses are deserialized into maps before being returned as one of:

- `{:ok, response}`
- `{:error, error_code, response}`
- `{:error, "Cannot connect to Algolia"}`: The client implements a retry strategy across all Algolia hosts with increasing timeouts. It should only return this error after it has tried all 4 hosts.
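As a sketch, the three response shapes above can be handled with a `case`; the index name and query here are illustrative:

```elixir
# Hypothetical handling of the three response shapes described above.
case Algolia.search("my_index", "some query") do
  {:ok, response} ->
    # response is a map, e.g. response["hits"] is the list of hits
    response["hits"]

  {:error, error_code, response} ->
    # An error response from Algolia, with its HTTP error code
    {:failed, error_code, response}

  {:error, "Cannot connect to Algolia"} ->
    # All retry hosts were exhausted
    :unreachable
end
```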
```elixir
"my_index" |> search("some query")
```

With options:

```elixir
"my_index" |> search("some query", attributesToRetrieve: "firstname", hitsPerPage: 20)
```

See all available search options here.
```elixir
multi([
  %{index_name: "my_index1", query: "search query"},
  %{index_name: "my_index2", query: "another query", hitsPerPage: 3},
  %{index_name: "my_index3", query: "3rd query", tagFilters: "promotion"}
])
```
You can specify a strategy to optimize your multiple queries:

- `:none`: Execute the sequence of queries until the end.
- `:stop_if_enough_matches`: Execute the sequence of queries until the number of hits is reached by the sum of hits.

```elixir
multi([query1, query2], strategy: :stop_if_enough_matches)
```
All `save_*` operations will override the object at the given objectID.

Save a single object to an index without specifying an objectID. The object must contain an objectID, or you must use the `id_attribute` option (see below):

```elixir
"my_index" |> save_object(%{objectID: "1"})
```
Save a single object with a given objectID:

```elixir
"my_index" |> save_object(%{title: "hello"}, "12345")
```
Save multiple objects to an index:

```elixir
"my_index" |> save_objects([%{objectID: "1"}, %{objectID: "2"}])
```
Partially update a single object:

```elixir
"my_index" |> partial_update_object(%{title: "hello"}, "12345")
```
Update multiple objects. Each object must contain an objectID, or you must use the `id_attribute` option (see below):

```elixir
"my_index" |> partial_update_objects([%{objectID: "1"}, %{objectID: "2"}])
```
By default, a partial update creates a new object if no object exists at the objectID. You can turn this off by passing `false` to the `:upsert?` option:

```elixir
"my_index" |> partial_update_object(%{title: "hello"}, "12345", upsert?: false)
"my_index" |> partial_update_objects([%{id: "1"}, %{id: "2"}], id_attribute: :id, upsert?: false)
```
All write functions such as `save_object` and `partial_update_object` come with an `id_attribute` option that lets you specify the objectID from an existing field in the object, so you do not have to generate it yourself:

```elixir
"my_index" |> save_object(%{id: "2"}, id_attribute: :id)
```
It also works for batch operations, such as `save_objects` and `partial_update_objects`:

```elixir
"my_index" |> save_objects([%{id: "1"}, %{id: "2"}], id_attribute: :id)
```
However, this option cannot be used together with an explicit objectID argument:

```elixir
"my_index" |> save_object(%{id: "1234"}, "1234", id_attribute: :id)
```

> Error
All write operations can be waited on by simply piping the response into `wait/1`:

```elixir
"my_index" |> save_object(%{id: "123"}) |> wait
```
Since the client polls the server to check the publishing status, you can specify the time between each tick of the poll; the default is 1000 ms:

```elixir
"my_index" |> save_object(%{id: "123"}) |> wait(2_000)
```
You can also use the underlying `wait_task` function explicitly:

```elixir
{:ok, %{"taskID" => task_id, "indexName" => index}} =
  "my_index" |> save_object(%{id: "123"})

wait(index, task_id)
```

or with the polling interval option:

```elixir
wait(index, task_id, 2_000)
```
List all indexes:

```elixir
list_indexes()
```
Move an index to a new one:

```elixir
move_index(source_index, destination_index)
```
Copy an index to a new one:

```elixir
copy_index(source_index, destination_index)
```
Clear all objects from an index:

```elixir
clear_index(index)
```
Get the settings of an index:

```elixir
get_settings(index)
```
Example response:

```elixir
{:ok,
 %{"minWordSizefor1Typo" => 4,
   "minWordSizefor2Typos" => 8,
   "hitsPerPage" => 20,
   "attributesToIndex" => nil,
   "attributesToRetrieve" => nil,
   "attributesToSnippet" => nil,
   "attributesToHighlight" => nil,
   "ranking" => [
     "typo",
     "geo",
     "words",
     "proximity",
     "attribute",
     "exact",
     "custom"
   ],
   "customRanking" => nil,
   "separatorsToIndex" => "",
   "queryType" => "prefixAll"}}
```
```elixir
set_settings(index, %{"hitsPerPage" => 20})

> %{"updatedAt" => "2013-08-21T13:20:18.960Z",
    "taskID" => 10210332,
    "indexName" => "my_index"}
```
- `get_object`
- `save_object`
- `save_objects`
- `update_object`
- `partial_update_object`
- `partial_update_objects`
- `delete_object`
- `delete_objects`
- `list_indexes`
- `clear_index`
- `wait_task`
- `wait` (convenience function for piping a response into `wait_task`)
- `set_settings`
- `get_settings`
- `list_user_keys`
- `get_user_key`
- `add_user_key`
- `update_user_key`
- `delete_user_key`
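As an illustrative sketch of combining the operations listed above (the index name and object are made up, and the return shapes are the `{:ok, response}` tuples described earlier):

```elixir
# Hypothetical write-then-search flow: save an object, wait for it to be
# indexed, search for it, then delete it.
"my_index"
|> Algolia.save_object(%{objectID: "42", title: "hello"})
|> Algolia.wait()

{:ok, %{"hits" => hits}} = Algolia.search("my_index", "hello")

Algolia.delete_object("my_index", "42")
```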
Use asdf and install the following versions.
Erlang
- 25.2.1
- 26.1.1
Elixir
- 1.14.4-otp-25
- 1.15.6-otp-26
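The versions above can be installed along these lines (assuming the asdf `erlang` and `elixir` plugins are already added):

```shell
# Install both Erlang/Elixir pairs used by the test matrix
asdf install erlang 25.2.1
asdf install erlang 26.1.1
asdf install elixir 1.14.4-otp-25
asdf install elixir 1.15.6-otp-26
```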
Run the test suite against each version pair:

```shell
rm -rf _build/test

ASDF_ERLANG_VERSION=26.1.1 \
ASDF_ELIXIR_VERSION=1.15.6-otp-26 \
ALGOLIA_APPLICATION_ID=$(op item get --vault "Font Awesome" "Algolia for isolated testing" --field "application id") \
ALGOLIA_API_KEY=$(op item get --vault "Font Awesome" "Algolia for isolated testing" --field "admin api key") \
mix test
```

```shell
rm -rf _build/test

ASDF_ERLANG_VERSION=25.2.1 \
ASDF_ELIXIR_VERSION=1.14.4-otp-25 \
ALGOLIA_APPLICATION_ID=$(op item get --vault "Font Awesome" "Algolia for isolated testing" --field "application id") \
ALGOLIA_API_KEY=$(op item get --vault "Font Awesome" "Algolia for isolated testing" --field "admin api key") \
mix test
```