
Propagate parse-confidence across relations #164

Open
amebel opened this issue Aug 21, 2014 · 3 comments

Comments

@amebel
Contributor

amebel commented Aug 21, 2014

Presently the confidence is only set for the ParseNodes and not for the relations. This was pointed out at https://groups.google.com/d/msg/opencog/8I4oBE2dOJc/GG0lqRmmKrwJ and in #154 (comment)

How should we handle the stv for unary & binary relations? @bgoertzel @williampma @ruiting @rodsol @linas

@linas
Member

linas commented Aug 23, 2014

I dunno. We are back to the question of "context". For a given
parse, we are 100% sure/confident that those relations are correct. So, the
ideal pipeline would take these with a confidence of 100%, perform some
reasoning based on previous sentences and on common sense, and
determine whether the sentence is 'consistent' with the known facts. If it
is, then the parse confidence of that parse can be strengthened. If it is
not, then the parse confidence should be lowered. The process is
continued for each parse, until only one (or maybe two) parses have almost
all of the confidence, and the remaining ones seem very unlikely. The
unlikely ones are then deleted, and the dominant one is then folded into
the main knowledgebase.

However, this kind of process requires that each parse be 'isolated' into
its own hypothetical universe that does not 'leak' information into the
main knowledgebase. I don't think we have any wiki page that demonstrates
how to do such isolated reasoning. It's also not clear how, after doing
such reasoning, we should determine a single final score to indicate
whether that parse is crazy or not.
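The pipeline described above can be sketched roughly as follows. This is a hypothetical illustration, not existing OpenCog code: each parse starts fully confident, a (stubbed) consistency check against prior knowledge strengthens or weakens it, the confidences are renormalized across competing parses, and parses whose share drops below a threshold are deleted.

```python
def winnow_parses(parses, consistency, rounds=10, threshold=0.05):
    """parses: iterable of parse ids.
    consistency(p) -> multiplier > 1 if parse p fits known facts, < 1 if not.
    Returns the surviving parses and their confidence shares."""
    conf = {p: 1.0 for p in parses}  # start 100% confident in each parse
    for _ in range(rounds):
        for p in list(conf):
            conf[p] *= consistency(p)  # strengthen or weaken each parse
        total = sum(conf.values())
        conf = {p: c / total for p, c in conf.items()}  # renormalize
        # delete parses that have become very unlikely
        conf = {p: c for p, c in conf.items() if c >= threshold}
    return conf

# Toy example: parse "a" is consistent with prior facts; "b" and "c" are not.
scores = {"a": 1.2, "b": 0.8, "c": 0.6}
surviving = winnow_parses(["a", "b", "c"], scores.get)
# Only "a" survives, holding essentially all of the confidence.
```

The dominant survivor would then be folded into the main knowledgebase; the hard part, as noted above, is keeping each parse's reasoning isolated while this runs.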


@linas
Member

linas commented Aug 23, 2014

In case my answer was too long: we should NOT propagate the confidence into
the relations. The relations need to stay at 100% in that context.
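To make that concrete, here is a minimal illustration (using a hypothetical atom representation, not the real Atomspace API): the ParseNode carries the parse's ranking confidence, while the relations under that parse keep stv (1, 1), since they are certain *given* that parse.

```python
from dataclasses import dataclass, field

@dataclass
class Atom:
    """Toy stand-in for an Atomspace atom with a simple truth value."""
    name: str
    strength: float = 1.0
    confidence: float = 1.0
    out: list = field(default_factory=list)

# The parse ranking lives on the ParseNode...
parse = Atom("ParseNode: sentence@1_parse_0", confidence=0.65)
# ...while the relation stays at stv (1, 1) within that parse's context.
rel = Atom("EvaluationLink: _subj(eat, cat)", strength=1.0, confidence=1.0)
context = Atom("ContextLink", out=[parse, rel])

# The relation's tv is untouched; uncertainty lives only on the ParseNode.
assert (rel.strength, rel.confidence) == (1.0, 1.0)
```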


@bgoertzel

In theory, PLN should be able to do reasoning on Atoms embedded in a
ContextLink, with the "leakage" into the rest of the Atomspace determined
by the node probability of the node serving as context...

E.g.

ContextLink <1>
    A
    Inheritance B C

ContextLink <1>
    A
    Inheritance C D

should yield

ContextLink <1>
    A
    Inheritance B C <1>

...

regardless of whether in the overall Atomspace we have

InheritanceLink A B <.001>

InheritanceLink B C <.02>

etc.

If we have

A <.0001>

then the degree of "leakage" of

ContextLink <1>
    A
    Inheritance B C <1>

into the overall Atomspace should be very little...

If we have

A <.5>

then the leakage would be a lot, because the node probability of A
is indicating implicitly that 50% of the observations recorded
in the Atomspace are observations of instances of A ...

-- Ben


