```haskell
import Data.Rewriting.Rule (Rule (..))
import qualified Data.Rewriting.Rule as Rule

import Control.Applicative
```
Does this line serve any purpose?
I believe this function is a bit too simple to be useful; for example, for terms without normal forms, it will not terminate. In general, an unrestricted search seems like a bad idea, but it's hard to predict the tuning knobs that users will need. A more modular approach could look like this:

```haskell
import Control.Applicative
import Data.Functor.Identity
import Data.Rewriting.Rules.Rewrite

-- one-step reducts
results :: Strategy f v v' -> Term f v -> [Term f v]
results s = fmap result . s

-- Generic breadth-first enumeration of normal forms on the ARS level.
--
-- The function `f` can do things like elimination of duplicates,
-- prioritization, or limiting the width (cf. beam search),
-- while the monad allows things like keeping track of all
-- terms seen so far.
normalFormsBFM :: Monad m => (a -> m [a]) -> ([a] -> m [a]) -> a -> m [[a]]
normalFormsBFM s f t = go [t] where
  go [] = return []
  go ts = do
    ts' <- mapM s ts
    (:) [t | (t, []) <- zip ts ts'] <$> (f (concat ts') >>= go)

-- pure version
normalFormsBF :: (a -> [a]) -> ([a] -> [a]) -> a -> [[a]]
normalFormsBF s f = runIdentity . normalFormsBFM (Identity . s) (Identity . f)

-- the proposed `normalForms` could be implemented as
normalForms :: Strategy f v v' -> Term f v -> [Term f v]
normalForms s = concat . normalFormsBF (results s) id

-- but we can also easily do stuff like
--   let nub' = Set.toList . Set.fromList in
--   let size = Term.fold (\v -> 1) (\f xs -> 1 + sum xs) in
--   concat . take 10 . normalFormsBF (results s) (filter (\t -> size t <= 10) . nub')
-- to limit the depth and the size of intermediate terms to 10,
-- and to avoid many duplicates.
```

Of course, that would raise the question of where to put the …
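To make the behaviour concrete, here is a self-contained toy instance of the breadth-first enumeration sketched above. The `Int` ARS and `step` are invented for illustration (they stand in for `results s`), and `normalFormsBF` is restated in pure form so the snippet compiles on its own:

```haskell
import Data.List (nub)

-- Pure breadth-first enumeration of normal forms, grouped by depth:
-- `s` gives one-step reducts, `f` post-processes each frontier.
normalFormsBF :: (a -> [a]) -> ([a] -> [a]) -> a -> [[a]]
normalFormsBF s f t0 = go [t0] where
  go [] = []
  go ts = [t | (t, []) <- zip ts rss] : go (f (concat rss))
    where rss = map s ts

-- Toy ARS on Ints: n -> n-2 (for n >= 2) and n -> n-3 (for n >= 3);
-- 0 and 1 are the normal forms.
step :: Int -> [Int]
step n = [n - 2 | n >= 2] ++ [n - 3 | n >= 3]

main :: IO ()
main = print (concat (normalFormsBF step nub 7))
-- prints [1,1,0]: the normal form 1 is reached at two different
-- depths, so it is reported twice even though `nub` deduplicates
-- each frontier.
```

The duplicate `1` in the output illustrates why frontier-local deduplication alone does not prevent duplicate normal forms across depths.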
Another source for inspiration: https://hackage.haskell.org/package/search-algorithms-0.2.0/docs/Algorithm-Search.html
A function for computing the normal forms of a given term w.r.t. a given strategy.
Question: should we avoid computing duplicate NFs? (In experiments this sometimes has a huge positive impact on the running time.) Or stay with the clearer (?) implementation?
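One possible answer to the duplicate question, as a small standalone sketch (the `Int` ARS and the names `step` and `normalFormsNoDup` are made up for illustration; this does not use the package's `Strategy` type): keep a set of every term enqueued so far, so no term is expanded twice and each normal form is reported exactly once.

```haskell
import qualified Data.Set as Set

-- Toy ARS for illustration: n -> n-2 (n >= 2), n -> n-3 (n >= 3);
-- 0 and 1 are the normal forms.
step :: Int -> [Int]
step n = [n - 2 | n >= 2] ++ [n - 3 | n >= 3]

-- Breadth-first normal-form search that remembers every term seen.
-- Each term is expanded at most once, so each normal form appears
-- exactly once in the result, at the cost of an Ord constraint and
-- memory proportional to the number of distinct terms visited.
normalFormsNoDup :: Ord a => (a -> [a]) -> a -> [a]
normalFormsNoDup s t0 = go (Set.singleton t0) [t0] where
  go _ [] = []
  go seen (t:queue)
    | null reducts = t : go seen queue          -- t is a normal form
    | otherwise    = go seen' (queue ++ fresh)  -- FIFO = breadth first
    where
      reducts = s t
      fresh   = filter (`Set.notMember` seen) reducts
      seen'   = foldr Set.insert seen fresh

main :: IO ()
main = print (normalFormsNoDup step 7)
-- prints [1,0]: each normal form exactly once.
```

This trades the clarity of the naive search for the `Ord` constraint and the memory of the seen-set, which is exactly the tuning-knob question raised in the review above.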