[148880] in cryptography@c2.net mail archive


Re: [Cryptography] Strict memory hard hash functions

daemon@ATHENA.MIT.EDU (Sergio Lerner)
Thu Jan 2 12:06:46 2014

X-Original-To: cryptography@metzdowd.com
Date: Thu, 02 Jan 2014 13:56:08 -0300
From: Sergio Lerner <sergiolerner@pentatek.com>
CC: "cryptography@metzdowd.com" <cryptography@metzdowd.com>
In-Reply-To: <112446395.20131231225253@gmail.com>
Errors-To: cryptography-bounces+crypto.discuss=bloom-picayune.mit.edu@metzdowd.com

On 31/12/2013 06:52 p.m., Krisztián Pintér wrote:
> Sergio Lerner (at Tuesday, December 31, 2013, 9:35:23 PM):
>> I've been playing with a property I named Strict memory hard hash
>> functions.  Strict memory hard functions are an extension of memory hard
>> http://bitslog.files.wordpress.com/2013/12/memohash-v0-1.pdf
>
> but my question is: you are aware of the ongoing password hashing
> competition www.password-hashing.net , aren't you?
Yes, but I'm not sure SeqMemoHash can compete with scrypt for password
hashing: password hashing does not require a hard (theoretical) lower
bound on the memory used, only an economic incentive not to use less
memory. So SeqMemoHash will be slower than any "weakly memory-limited"
competitor. That said, I haven't yet run side-by-side performance
comparisons; I will.
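For readers who haven't seen the paper, here is a rough sketch of the
general idea of a sequential memory-hard fill. This is my own
illustrative simplification, not the exact SeqMemoHash construction
(the function name, SHA-256 as the PRF, and the chaining rule are all
assumptions for the example):

```python
import hashlib

def seq_memo_fill(seed: bytes, n_blocks: int, n_rounds: int) -> bytes:
    """Illustrative sequential memory-hard fill: allocate n_blocks cells,
    then repeatedly overwrite each cell with a hash chaining it to its
    predecessor, so recomputing a discarded cell forces re-deriving
    earlier state."""
    # Initialize memory deterministically from the seed.
    mem = [hashlib.sha256(seed + i.to_bytes(4, "big")).digest()
           for i in range(n_blocks)]
    # Each round makes every cell depend on the previous cell's
    # current value, so the dependency chain spans the whole memory.
    for _ in range(n_rounds):
        for i in range(n_blocks):
            prev = mem[i - 1]  # wraps to the last cell when i == 0
            mem[i] = hashlib.sha256(prev + mem[i]).digest()
    # Final digest over the whole memory.
    h = hashlib.sha256()
    for block in mem:
        h.update(block)
    return h.digest()
```

The point of the structure, not of this particular toy code, is that an
attacker who keeps fewer than n_blocks cells must recompute dropped
cells, and each recomputation cascades backwards through the rounds.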

>
> i see one downside though. if i understand correctly, you need N^2
> time if we want N memory blocks (blocks being a reasonable output size
> of some PRF). 
No, you don't actually need N^2 time. As far as I can see, since the
amount of temporary memory grows with each "backtrack", the number of
backtracks each round must perform increases with depth. So with only
35 rounds (for any value of N>35), the first round (the last round in
the backtrack) will be evaluated 35! times, which is equivalent to
128-bit security. So for N>35 the running time grows linearly with the
amount of memory, since no more than 35 rounds are needed.
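The arithmetic behind the choice of 35 rounds can be checked directly:
35! exceeds 2^128, while 34! falls just short of it, so 35 is the
smallest round count reaching that work factor.

```python
import math

# 35! is roughly 2^132.9, while 34! is roughly 2^127.8,
# so 35 rounds is the smallest count exceeding 2^128 evaluations.
bits_35 = math.log2(math.factorial(35))
bits_34 = math.log2(math.factorial(34))
print(f"log2(35!) = {bits_35:.1f}, log2(34!) = {bits_34:.1f}")
assert math.factorial(35) > 2**128
assert math.factorial(34) < 2**128
```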

> it either limits us severely in memory use if we aim for
> "regular" block sizes like 256 to 512 bits, or requires huge block
> size. 
As I said previously, the internal state size of the compression
function does not need to grow (in SeqMemoHash). For TreeMemoHash I
think the objection holds, but the construction could be improved.
> the latter approach seems to ease the problem to some degree.
> can we consider like 65536 bit or higher block sizes? does that hurt
> security? maybe we need to consider the internals of such a huge PRF?
Maybe the Keccak sponge design can be securely used for any desired
state memory size (the parameter /b/, in the Keccak design).

Best regards, Sergio.

_______________________________________________
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography